
International Journal of Computer Trends and Technology (IJCTT), Volume 4, Issue 6, June 2013

ISSN: 2231-2803, http://www.ijcttjournal.org



Secure Storage Services in Cloud Computing

S. Muthakshi #1, Dr. T. Meyyappan M.Sc., MBA, M.Phil., Ph.D. *2

# Department of Computer Science and Engineering, Alagappa University, Karaikudi, Tamil Nadu, India.




Abstract: Cloud storage services let users access their data anywhere, at any time, without trouble. Existing systems that support remote data integrity checking are useful for quality-of-service testing, but they do not deal with server failures or misbehaving servers. The proposed system ensures the integrity of the server storage that holds cloud users' data. It achieves strong cloud storage security and fast data error localization using the results of an auditing mechanism carried out by a Third Party Auditor (TPA), and it further supports secure and efficient dynamic operations on outsourced data. The TPA performs public auditing in order to maintain the integrity of the data stored in the cloud: the user delegates the integrity-checking tasks for the data stored in cloud storage to the TPA, who then carries out the auditing process. An erasure-correcting code is used in file distribution to provide dependability against Byzantine failures. Data integrity is ensured with the help of a verification key together with the erasure-coded data, which also allows storage correctness checking and identification of misbehaving cloud servers. An audit protocol blocker is introduced to monitor the correctness of the user and the TPA; it prevents cloud users from misusing the privileges granted to them by the cloud server.

Keywords: Data storage, Cloud Service Provider (CSP), Third Party Auditor (TPA).
I. INTRODUCTION
Several trends are opening up the era of Cloud Computing, an Internet-based development and use of computer technology. Ever cheaper and more powerful processors, together with the software as a service (SaaS) computing architecture, are transforming data centers into pools of computing service on a huge scale. Increasing network bandwidth and reliable yet flexible network connections make it possible for users to subscribe to high-quality services from data and software that reside solely on remote data centers.

ARCHITECTURE OF THE CLOUD STORAGE SYSTEM:

Fig. 1: Cloud storage system. This architecture represents the job of the third party auditor.

Moving data into the cloud offers great convenience to users, since they do not have to care about the complexities of direct hardware management. The pioneering Cloud Computing vendors Amazon Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2) are both well-known examples. While these Internet-based online services do provide huge amounts of storage space and customizable computing resources, this computing platform shift is at the same time eliminating the responsibility of local machines for data maintenance. As a result, users are at the mercy of their cloud service providers for the availability and integrity of their data. On the one hand, although cloud infrastructures are much more powerful and reliable than personal computing devices, a broad range of both internal and external threats to data integrity still exists. Examples of
outages and data loss incidents at noteworthy cloud storage services appear from time to time. On the other hand, since users may not retain a local copy of their outsourced data, there exist various incentives for cloud service providers (CSPs) to behave unfaithfully towards cloud users regarding the status of their outsourced data.
In order to achieve assurances of cloud data integrity and availability and to enforce the quality of cloud storage service, efficient methods that enable on-demand data correctness verification on behalf of cloud users have to be designed. However, the fact that users no longer have physical possession of their data in the cloud prohibits the direct adoption of traditional cryptographic primitives for data integrity protection. Hence, the verification of cloud storage correctness must be conducted without explicit knowledge of the whole data files. Meanwhile, cloud storage is not just a third-party data warehouse: the data stored in the cloud may not only be accessed but also frequently updated by the users, through insertion, deletion, modification, appending, and so on. Thus, it is also imperative to integrate this dynamic feature into the cloud storage correctness assurance, which makes the system design even more challenging. Last but not least, the deployment of Cloud Computing is powered by data centers running in a simultaneous, cooperative, and distributed manner. Existing remote integrity-checking techniques, while useful to ensure storage correctness without users possessing local data, all focus on the single-server scenario. They may be useful for quality-of-service testing, but they do not guarantee data availability in case of server failures.
Although directly applying these techniques to distributed storage (multiple servers) could be straightforward, the resulting storage verification overhead would be linear in the number of servers. As a complementary approach, researchers have also proposed distributed protocols for ensuring storage correctness across multiple servers or peers. However, while providing efficient cross-server storage verification and data availability assurance, these schemes all focus on static or archival data. As a result, their capability of handling dynamic data remains unclear, which inevitably limits their full applicability in cloud storage scenarios.

II. RELATED WORK

A simple approach such as message authentication codes (MACs) can be used to protect data integrity. The data owner initially maintains locally a small number of MACs for the data files to be outsourced. When the owner wants to retrieve data, he or she verifies its integrity by recalculating the MAC of the received data file and comparing it to the locally precomputed value. Although this method allows data owners to verify the correctness of the data received from the cloud, MACs cannot be employed when the data file is large. For large data files a hash tree can be employed, in which the leaves contain hashes of data blocks and internal nodes contain hashes of their children.
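As a concrete illustration, here is a minimal sketch of this MAC-based check in Python, using the standard hmac module; the key and file contents are illustrative placeholders, not part of the cited schemes.

import hashlib
import hmac

def compute_mac(key: bytes, data: bytes) -> bytes:
    # The owner computes this locally before outsourcing the file.
    return hmac.new(key, data, hashlib.sha256).digest()

# Owner side, before outsourcing: keep only the key and the short MAC.
key = b"owner-secret-key"                  # illustrative key
data_file = b"contents of the outsourced data file"
local_mac = compute_mac(key, data_file)

# Owner side, after retrieval: recompute and compare to the local value.
retrieved = b"contents of the outsourced data file"  # returned by the CSP
if not hmac.compare_digest(local_mac, compute_mac(key, retrieved)):
    raise ValueError("integrity check failed: file was modified")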
To authenticate received data, the data owner has to store the root hash of the tree. The root alone, however, gives no assurance about the correctness of the rest of the outsourced data, so a TPA can be used to perform this task on the owner's behalf. Various mechanisms have been proposed for using a TPA to relieve the data owner of the burden of local data storage and maintenance; outsourcing, though, also eliminates the owner's physical control over storage dependability and security, which has traditionally been expected by both individuals and enterprises with high service-level requirements. This kind of audit service not only saves data owners' computation resources but also provides a transparent yet cost-effective method for data owners to gain trust in the cloud.
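The hash-tree idea above can be sketched as follows; this minimal Python example computes only the root over a toy block list (sibling-path verification of individual blocks is omitted).

import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks: list) -> bytes:
    # Leaves contain hashes of the data blocks.
    level = [sha256(b) for b in blocks]
    # Internal nodes contain the hash of their two children.
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node if odd
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# The owner stores only the root; any block can later be verified
# against it given the sibling hashes on its path to the root.
root = merkle_root([b"block0", b"block1", b"block2", b"block3"])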
The presence of a TPA removes the need for the client himself to audit whether his data stored in the cloud are indeed intact, which can be important in achieving economies of scale for Cloud Computing. While this method saves the owner's computational resources and storage cost, however, it does not address how the TPA itself can be trusted: if the TPA modifies or deletes data, or becomes intrusive and passes the data owner's information to unauthorized users, the owner has no way of knowing.

Thus, new approaches are required to solve the above problem. Abhishek Mohta and R. Sahu have given an algorithm that ensures data integrity and supports dynamic data operations, using encryption and message digests: encryption ensures that data is not leaked in transit, and the message digest identifies the client who sent the data. They designed algorithms for data manipulation, record insertion, and record deletion. The insertion and manipulation algorithms work efficiently, but on deletion the scheme cannot identify who deleted a record, or how and when; once someone deletes a record, the algorithm can no longer work. In that case an indexing scheme can be used, as sketched below: if every record access is traced by index, recording when and by which user the record was accessed, then a user who tries to delete a record can be identified from the trace.
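A possible realization of this indexing idea is sketched below; the TracedStore class and its fields are hypothetical illustrations, not part of the cited algorithm.

import time

class TracedStore:
    """Toy record store that logs who touched which index and when."""

    def __init__(self):
        self.records = {}
        self.audit_log = []            # (timestamp, user, action, index)

    def _trace(self, user, action, index):
        self.audit_log.append((time.time(), user, action, index))

    def insert(self, user, index, record):
        self._trace(user, "insert", index)
        self.records[index] = record

    def delete(self, user, index):
        self._trace(user, "delete", index)   # deletions stay attributable
        self.records.pop(index, None)

store = TracedStore()
store.insert("alice", 7, b"payload")
store.delete("mallory", 7)
# audit_log now shows which user deleted record 7, and when.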
Ateniese et al. were the first to consider public auditability, in their provable data possession (PDP) scheme, which ensures possession of data files on untrusted storage. For auditing outsourced data, their technique utilizes RSA-based homomorphic authenticators and randomly samples a few blocks of the file. However, in their scheme public auditability demands that a linear combination of the sampled blocks be exposed to the external auditor. Used directly, their protocol is not provably privacy-preserving and may thus leak user data to the auditor. Cong Wang et al. used a public-key-based homomorphic authenticator and, to achieve a privacy-preserving public auditing system for cloud data storage that keeps all the above requirements in mind, uniquely integrated it with a random masking technique.
For efficiently handling multiple auditing tasks, the technique of bilinear aggregate signatures can be explored to extend the main result to a multi-user setting, where the TPA can perform multiple auditing tasks simultaneously. A keyed hash function h_k(F) is used in the proof of retrievability (POR) scheme. Before archiving the data file F in cloud storage, the verifier precomputes the cryptographic hash of F using h_k(F) and stores this hash as well as the secret key K. To check the integrity of the file F, the verifier releases the secret key K to the cloud archive and asks it to compute and return the value of h_k(F). By storing multiple hash values under different keys, each one an independent proof, the verifier can check the integrity of the file F multiple times. Although this scheme is very simple and easy to implement, its main drawback is the high resource cost of the implementation: the verifier has to store as many keys as the number of checks it wants to perform, together with the hash value of the data file F under each key, and computing the hash value of even a moderately large data file can be computationally burdensome for some clients (PDAs, mobile phones, etc.).
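A minimal sketch of this keyed-hash POR, assuming HMAC-SHA256 as the keyed hash h_k; it makes the linear storage cost and the whole-file recomputation visible.

import hashlib
import hmac
import os

F = b"the archived data file"             # stands in for a large file

# Verifier, before archiving: one independent (key, hash) pair per
# future check; note the linear storage cost in the number of checks.
checks = [(k, hmac.new(k, F, hashlib.sha256).digest())
          for k in (os.urandom(16) for _ in range(10))]

# One audit round: release a key, and the archive must recompute the
# keyed hash over the entire file F.
key, expected = checks.pop()
proof = hmac.new(key, F, hashlib.sha256).digest()   # archive's work
assert hmac.compare_digest(proof, expected), "file F is not intact"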
Each invocation of the protocol also requires the archive to process the entire file F. This can be computationally burdensome for the archive even for a lightweight operation such as hashing. Furthermore, it requires the prover to read the entire file F, a significant overhead for an archive whose intended load is only an occasional read per file, yet where every file is to be tested frequently. Ari Juels and Burton S. Kaliski Jr. proposed a proof-of-retrievability scheme for large files using sentinels. In this scheme only a single key is used, irrespective of the size of the file or the number of files, unlike in the keyed-hash approach, where many keys are needed.
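A toy sketch of the sentinel idea follows; it derives sentinel positions from a single seed standing in for the key, and omits the encryption and error-correcting preprocessing of the actual scheme.

import os
import random

BLOCK = 16

def embed_sentinels(data_blocks, n_sentinels, seed):
    # Choose sentinel positions from a single key (the seed) and place
    # random-looking blocks there. In the real scheme the file is
    # encrypted first, so sentinels cannot be distinguished from data.
    rng = random.Random(seed)
    total = len(data_blocks) + n_sentinels
    positions = set(rng.sample(range(total), n_sentinels))
    sentinels, out, it = {}, [], iter(data_blocks)
    for i in range(total):
        if i in positions:
            sentinels[i] = os.urandom(BLOCK)
            out.append(sentinels[i])
        else:
            out.append(next(it))
    return out, sentinels

stored, sentinels = embed_sentinels(
    [os.urandom(BLOCK) for _ in range(100)], n_sentinels=5, seed=42)

# Audit: request the block at a sentinel position and compare.
pos, value = next(iter(sentinels.items()))
assert stored[pos] == value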
III. PROPOSED METHOD
If any part of the cloud storage is affected, the data can easily be recovered from the cloud with the help of the third party auditor (TPA). We measure cloud correctness with an integrity auditing mechanism that supports secure and efficient dynamic operations on outsourced data, including block modification, deletion, and append. We ensure the data are always correct; if a mismatch occurs, we can easily find where the corruption was made and, based on that, recover the data from the cloud as far as possible. A user who does not have the time to perform the storage correctness verification can optionally delegate this task to an independent third party auditor, making the cloud storage publicly verifiable. To achieve assurances of cloud data integrity and availability and to enforce the quality of dependable cloud storage service for users, we propose an effective and flexible distributed scheme with explicit dynamic data support, including block update, delete, and append. An append operation means the user wants to increase the size of his stored data by adding blocks at the end of the data file. Sometimes, after being stored in the cloud, certain data blocks may need to be deleted; the delete operation we consider is a general one, in which the user replaces a data block with zero or some special reserved data symbol. These operations are sketched below.
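At the level of a single block list, the three dynamic operations can be sketched as follows; the real scheme additionally updates the corresponding tokens and parity blocks, which is omitted here.

ZERO_BLOCK = b"\x00" * 16          # special reserved "deleted" symbol

file_blocks = [b"a" * 16, b"b" * 16, b"c" * 16]

def update(blocks, i, new_block):  # block modification
    blocks[i] = new_block

def delete(blocks, i):             # replace with the reserved symbol
    blocks[i] = ZERO_BLOCK

def append(blocks, new_block):     # grow the file at its end
    blocks.append(new_block)

update(file_blocks, 0, b"A" * 16)
delete(file_blocks, 1)             # block 1 becomes the zero symbol
append(file_blocks, b"d" * 16)     # the file now has four blocks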

PROPOSED ALGORITHM:

A) Challenge Token Precomputation

In order to achieve assurance of data storage correctness and data error localization simultaneously, our scheme relies entirely on precomputed verification tokens. The main idea is as follows: before file distribution, the user precomputes a certain number of short verification tokens on each individual vector G^(j), j ∈ {1, ..., n}, each token covering a random subset of data blocks. Later, when the user wants to check the storage correctness of the data in the cloud, he challenges the cloud servers with a set of randomly generated block indices. Upon receiving the challenge, each cloud server computes a short signature over the specified blocks and returns it to the user. The values of these signatures should match the corresponding tokens precomputed by the user. Meanwhile, since all servers operate over the same subset of indices, the requested response values for the integrity check must also form a valid codeword determined by the secret matrix P.
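A minimal sketch of the token precomputation follows, over an illustrative prime field rather than the paper's GF(2^p), with SHA-256 standing in for the PRF and a seeded generator standing in for the PRP.

import hashlib
import random

PRIME = 2**31 - 1                  # illustrative field; the paper
                                   # works over GF(2^p)

def prf(key: bytes, i: int) -> int:
    # Pseudorandom function f_key(i), sketched with SHA-256.
    digest = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") % PRIME

def precompute_token(G_j, i, k_chal, k_prp, r):
    # Token for challenge i over one server's data vector G_j,
    # covering a pseudorandom subset of r block indices.
    alpha = prf(k_chal, i)
    rng = random.Random(k_prp + i.to_bytes(8, "big"))  # stand-in PRP
    indices = rng.sample(range(len(G_j)), r)
    return sum(pow(alpha, q + 1, PRIME) * G_j[idx]
               for q, idx in enumerate(indices)) % PRIME

G_j = [random.randrange(PRIME) for _ in range(64)]     # one data vector
token = precompute_token(G_j, i=1, k_chal=b"kc", k_prp=b"kp", r=8)
# The server later recomputes the same sum over the challenged
# indices; a mismatch flags that server as misbehaving.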

B) Correctness Verification and Error Localization

Error localization is a key prerequisite for eliminating errors in storage systems, and it is also of critical importance for identifying potential threats from external attacks. However, many previous schemes do not explicitly consider the problem of data error localization and thus provide only binary results for the storage verification. Our scheme outperforms those by integrating correctness verification and error localization (misbehaving-server identification) in our challenge-response protocol: the response values from the servers for each challenge not only determine the correctness of the distributed storage but also contain information to locate potential data error(s).

C) File Retrieval and Error Recovery

Since our layout of the file matrix is systematic, the user can reconstruct the original file by downloading the data vectors from the first m servers, assuming that they return correct response values. Notice that our verification scheme is based on random spot-checking, so the storage correctness assurance is probabilistic. However, by choosing the system parameters (e.g., r, l, t) appropriately and performing enough rounds of verification, we can guarantee successful file retrieval with high probability. On the other hand, whenever data corruption is detected, the comparison of precomputed tokens and received response values can guarantee the identification of the misbehaving server(s), again with high probability. Therefore, the user can always ask the servers to send back the blocks of the r rows specified in the challenge and regenerate the correct blocks by erasure correction, as shown in the error recovery algorithm of Section IV, as long as the number of identified misbehaving servers is less than k (otherwise there is no way to recover the corrupted blocks due to lack of redundancy, even if we know the positions of the misbehaving servers). The newly recovered blocks can then be redistributed to the misbehaving servers to maintain the correctness of storage.
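The probabilistic nature of spot-checking can be made concrete: if z of the l rows are corrupted and r rows are challenged, the detection probability is 1 - C(l-z, r)/C(l, r). A small Python sketch with illustrative parameters:

from math import comb

def detection_probability(l: int, z: int, r: int) -> float:
    # Probability that challenging r of l rows hits at least one of
    # the z corrupted rows under uniform random spot-checking.
    return 1 - comb(l - z, r) / comb(l, r)

# Illustrative numbers: with l = 1000 rows and 1% of them corrupted,
# challenging r = 460 rows detects the corruption with probability > 0.99.
print(detection_probability(1000, 10, 460))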





ADVANTAGES:
These algorithms ensure storage correctness: the user's data are indeed stored appropriately and kept intact at all times in the cloud. The proposed design allows the user to audit the cloud storage with lightweight communication and computation cost. A third party auditor is used to ensure security.

IV. STEPS OF THE ALGORITHM

Algorithm for Storage Correctness and Error Localization

1. procedure CHALLENGE(i)
2.   Recompute α_i = f_{k_chal}(i) and the permutation key k_prp^(i) from K_PRP
3.   Send {α_i, k_prp^(i)} to all cloud servers
4.   Receive from the servers:
       { R_i^(j) = Σ_{q=1..r} (α_i)^q · G^(j)[φ_{k_prp^(i)}(q)] | 1 ≤ j ≤ n }
5.   for j = m+1 to n do
6.     R_i^(j) ← R_i^(j) − Σ_{q=1..r} f_{k_j}(s_{I_q,j}) · (α_i)^q, where I_q = φ_{k_prp^(i)}(q)
7.   end for
8.   if (R_i^(1), ..., R_i^(m)) · P == (R_i^(m+1), ..., R_i^(n)) then
9.     accept, and be ready for the next challenge
10.  else
11.    for j = 1 to n do
12.      if R_i^(j) ≠ v_i^(j) then
13.        return "server j is misbehaving"
14.      end if
15.    end for
16.  end if
17. end procedure
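The acceptance test and localization step of the algorithm above can be rendered as a short Python sketch; the prime field and the parity matrix layout are illustrative stand-ins for the paper's GF(2^p) construction.

PRIME = 2**31 - 1                    # illustrative field, as before

def verify_and_localize(R, P, tokens, m):
    # R: blinded response values from the n servers for one challenge;
    # P: the m x (n-m) secret parity matrix;
    # tokens: the user's precomputed values v_i^(j).
    n = len(R)
    # Correctness check: the data part times the parity matrix must
    # reproduce the parity part (the responses form a valid codeword).
    parity = [sum(R[i] * P[i][j] for i in range(m)) % PRIME
              for j in range(n - m)]
    if parity == list(R[m:]):
        return "accept"
    # Error localization: any server whose response differs from its
    # precomputed token is misbehaving.
    return [j for j in range(n) if R[j] != tokens[j]]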

Algorithm for Error Recovery

1. procedure
   % Assume the block corruptions have been detected among the specified r rows;
   % assume s ≤ k servers have been identified as misbehaving.
2. Download the r rows of blocks from the servers;
3. Treat the s servers as erasures and recover the blocks;
4. Resend the recovered blocks to the corresponding servers;
5. end procedure
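Step 3 can be illustrated with the simplest erasure code, a single XOR parity block that tolerates one erasure; the paper's erasure-correcting code generalizes this to up to k erasures.

def xor_blocks(blocks):
    # Bytewise XOR of equal-length blocks.
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

data = [b"\x01" * 4, b"\x02" * 4, b"\x07" * 4]   # blocks on 3 servers
parity = xor_blocks(data)                        # block on the parity server

# Server 0 is identified as misbehaving: treat its block as an erasure
# and regenerate it from the surviving blocks plus the parity block.
recovered = xor_blocks(data[1:] + [parity])
assert recovered == data[0]
# The recovered block is then resent to the misbehaving server (step 4).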

V. CONCLUSION

In this paper, we investigate the problem of data security in cloud data storage, which is essentially a distributed storage system. To achieve assurances of cloud data integrity and availability and to enforce the quality of dependable cloud storage service for users, we propose an effective and flexible distributed scheme with explicit dynamic data support, including block update, delete, and append. By utilizing the homomorphic token with distributed verification of erasure-coded data, our scheme achieves the integration of storage correctness assurance and data error localization: whenever data corruption is detected during the storage correctness verification across the distributed servers, we can almost guarantee the simultaneous identification of the misbehaving servers. We show that our scheme is highly efficient and resilient to Byzantine failure, malicious data modification attacks, and even server-colluding attacks. Considering the time, computation resources, and related online burden of users, we also extend the proposed scheme to support third-party auditing, where users can safely delegate the integrity-checking tasks to third-party auditors and use the cloud storage services worry-free.
REFERENCES

[1] C. Wang, Q. Wang, K. Ren, and W. Lou, "Ensuring Data Storage Security in Cloud Computing," Proc. 17th Int'l Workshop on Quality of Service (IWQoS '09), pp. 1-9, July 2009.
[2] Amazon.com, "Amazon Web Services (AWS)," http://aws.amazon.com, 2009.
[3] Sun Microsystems, Inc., "Building Customer Trust in Cloud Computing with Transparent Security," https://www.sun.com/offers/details/sun_transparency.xml, Nov. 2009.
[4] K. Ren, C. Wang, and Q. Wang, "Security Challenges for the Public Cloud," IEEE Internet Computing, vol. 16, no. 1, pp. 69-73, 2012.
[5] M. Arrington, "Gmail Disaster: Reports of Mass Email Deletions," http://www.techcrunch.com/2006/12/28/gmail-disaster-reports-of-mass-email-deletions, Dec. 2006.
[7] Amazon.com, "Amazon S3 Availability Event: July 20, 2008," http://status.aws.amazon.com/s3-20080720.html, July 2008.
[8] S. Wilson, "Appengine Outage," http://www.cio-weblog.com/50226711/appengine_outage.php, June 2008.
[9] B. Krebs, "Payment Processor Breach May Be Largest Ever," http://voices.washingtonpost.com/securityfix/2009/01/payment_processor_breach_may_b.html, Jan. 2009.
[10] A. Juels and B.S. Kaliski Jr., "PORs: Proofs of Retrievability for Large Files," Proc. 14th ACM Conf. on Computer and Comm. Security (CCS '07), pp. 584-597, Oct. 2007.
[11] G. Ateniese, R. Burns, R. Curtmola, J. Herring, L. Kissner, Z. Peterson, and D. Song, "Provable Data Possession at Untrusted Stores," Proc. 14th ACM Conf. on Computer and Comm. Security (CCS '07), pp. 598-609, Oct. 2007.
[12] M.A. Shah, M. Baker, J.C. Mogul, and R. Swaminathan, "Auditing to Keep Online Storage Services Honest," Proc. 11th USENIX Workshop on Hot Topics in Operating Systems (HotOS '07), pp. 1-6, 2007.
[13] M.A. Shah, R. Swaminathan, and M. Baker, "Privacy-Preserving Audit and Extraction of Digital Contents," Cryptology ePrint Archive, Report 2008/186, http://eprint.iacr.org, 2008.
[14] G. Ateniese, R.D. Pietro, L.V. Mancini, and G. Tsudik, "Scalable and Efficient Provable Data Possession," Proc. Fourth Int'l Conf. on Security and Privacy in Comm. Networks (SecureComm '08), pp. 1-10, 2008.
[15] Q. Wang, C. Wang, J. Li, K. Ren, and W. Lou, "Enabling Public Verifiability and Data Dynamics for Storage Security in Cloud Computing," Proc. 14th European Conf. on Research in Computer Security (ESORICS '09), pp. 355-370, 2009.
