The document reports on a proposed decentralized information accountability framework to track how user data is used in the cloud. It analyzes existing cloud service models and outlines the objectives, scope, system analysis, design, and feasibility of the proposed framework. The system analysis section describes challenges with current single-server systems and outlines modules for data integrity, security, and distributed storage across multiple servers. A feasibility study examines the technical, social, and economic viability of the project. The system design section provides diagrams modeling the data flow, entity relationships, system workflows, and use cases of the proposed accountability framework.
Visual cryptography is a secret sharing scheme that allows for the encryption of written text or images in a perfectly secure way without any computation. It works by dividing the secret into multiple shares, where only when a sufficient number of shares are superimposed can the secret be revealed to the human visual system. For example, in a 2 by 2 scheme, a secret image is encoded into two shares such that individually the shares reveal no information, but when overlayed together the secret image is revealed, though with some loss of contrast and resolution. Visual cryptography has applications in security, watermarking, and remote voting.
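As a concrete illustration of the 2-of-2 scheme described above, here is a minimal sketch in Python, assuming the secret is a binary NumPy array (1 = black, 0 = white) and each secret pixel expands into a 2x2 block of subpixels per share; this is an illustrative construction, not the exact encoding any particular document uses:

```python
# 2-out-of-2 visual cryptography with 2x2 pixel expansion.
import numpy as np

# The six 2x2 patterns with exactly two black subpixels.
PATTERNS = [np.array(p, dtype=np.uint8).reshape(2, 2) for p in
            [(1,1,0,0), (0,0,1,1), (1,0,1,0), (0,1,0,1), (1,0,0,1), (0,1,1,0)]]

def make_shares(secret, rng=np.random.default_rng()):
    h, w = secret.shape
    share1 = np.zeros((2*h, 2*w), dtype=np.uint8)
    share2 = np.zeros((2*h, 2*w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            p = PATTERNS[rng.integers(len(PATTERNS))]
            share1[2*y:2*y+2, 2*x:2*x+2] = p
            # Same pattern for a white secret pixel, complement for black.
            share2[2*y:2*y+2, 2*x:2*x+2] = p if secret[y, x] == 0 else 1 - p
    return share1, share2

secret = np.array([[0, 1], [1, 0]], dtype=np.uint8)
s1, s2 = make_shares(secret)
stacked = np.maximum(s1, s2)  # overlaying = OR of black subpixels
# White pixels stack to half-black blocks, black pixels to fully black
# blocks: exactly the contrast loss mentioned above.
```

Each share alone is a random two-black-of-four pattern per pixel, so it leaks nothing; only the stacking reveals the image.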
Image steganography is the art of hiding information within digital images. The document discusses various techniques for image steganography including LSB (least significant bit) and DCT (discrete cosine transform). LSB is a simple spatial domain technique that replaces the least significant bits of image pixels with bits of a secret message. DCT operates in the frequency domain by transforming image blocks and hiding data in the mid-frequency DCT coefficients. The document compares the advantages and disadvantages of these techniques, and discusses their applications for hiding private information or digital watermarking. Metrics for analyzing steganography systems like bit error rate, mean square error, and peak signal to noise ratio are also introduced.
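A minimal sketch of the 1-bit LSB technique and the PSNR metric mentioned above, assuming an 8-bit grayscale cover image held as a flat NumPy array; real tools add length headers and often pseudo-random pixel selection:

```python
import numpy as np

def embed_lsb(pixels, message: bytes):
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    if bits.size > pixels.size:
        raise ValueError("cover image too small for message")
    stego = pixels.copy()
    stego[:bits.size] = (stego[:bits.size] & 0xFE) | bits  # replace LSBs
    return stego

def extract_lsb(pixels, n_bytes: int):
    return np.packbits(pixels[:n_bytes * 8] & 1).tobytes()

cover = np.random.default_rng(0).integers(0, 256, 64, dtype=np.uint8)
stego = embed_lsb(cover, b"hi")
assert extract_lsb(stego, 2) == b"hi"

# Peak signal-to-noise ratio, one of the quality metrics listed above.
mse = np.mean((cover.astype(float) - stego.astype(float)) ** 2)
psnr = 10 * np.log10(255**2 / mse) if mse > 0 else float("inf")
```

Because only the least significant bit of each touched pixel changes, the MSE stays tiny and the PSNR high, which is why LSB embedding is visually imperceptible.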
SECRY - Secure file storage on cloud using hybrid cryptography (ALIN BABU)
Final project presentation of a final-year B.Tech CSE project at APJ Abdul Kalam Technological University.
About the project
Cloud computing has become a major trend: it is a data hosting technology that has grown very popular in recent years. In this project, we are developing a web application that can securely store files on a cloud server. We propose a system that uses a hybrid cryptography technique to store data securely in the cloud. When deployed in a cloud environment, the hybrid approach makes the remote server more secure and thus helps users place more trust in their data in the cloud. For data security and privacy protection, the fundamental challenge of separating sensitive data and enforcing access control is fulfilled. Cryptography translates the original data into an unreadable format using keys, so only an authorized person can access data from the cloud server.
We provide cloud storage that uses multiple cryptographic techniques, an approach known as hybrid cryptography. The product provides confidentiality by securing both upload and download. The data is kept secure through multi-level security techniques and the use of multiple servers for storage.
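A minimal sketch of one common hybrid construction, assuming AES-based Fernet for bulk file encryption and RSA-OAEP to wrap the per-file key, using the third-party `cryptography` package; the presentation does not name its exact algorithms, so this pairing is purely illustrative:

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Server-side RSA key pair (in practice generated once and stored securely).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def encrypt_file(data: bytes):
    file_key = Fernet.generate_key()              # fresh symmetric key
    ciphertext = Fernet(file_key).encrypt(data)   # fast bulk encryption
    wrapped_key = public_key.encrypt(file_key, OAEP)  # slow, but short input
    return ciphertext, wrapped_key

def decrypt_file(ciphertext: bytes, wrapped_key: bytes):
    file_key = private_key.decrypt(wrapped_key, OAEP)
    return Fernet(file_key).decrypt(ciphertext)

ct, wk = encrypt_file(b"payroll.xlsx contents")
assert decrypt_file(ct, wk) == b"payroll.xlsx contents"
```

The symmetric cipher handles the large file quickly while the asymmetric step protects only the short key; that split is what makes the hybrid approach practical.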
02 Types of Computer Forensics Technology - Notes (Kranthi)
The document discusses various types of computer forensics technology used by law enforcement, military, and businesses. It describes the Computer Forensics Experiment 2000 (CFX-2000) which tested an integrated forensic analysis framework to determine motives and identity of cyber criminals. It also discusses specific computer forensics software tools like SafeBack for creating evidence backups and Text Search Plus for quickly searching storage media for keywords. The document provides details on different types of computer forensics technology used for remote monitoring, creating trackable documents, and theft recovery.
This document describes a student project to develop a cloud-based data storage application. The project is motivated by the student's interest in cloud computing and Java, and their desire to understand the development process. The aim is to create software that allows users to securely store data on a centralized server and access it from any device with an Internet connection. The product will work across different devices by having both a website and mobile app. It will allow users to sign up, upload and download files, and access their data from any computer on the cloud.
Digital evidence acquisitions can be stored in raw, proprietary, or Advanced Forensics Format (AFF). The document discusses various acquisition methods and tools for disk-to-image, disk-to-disk, logical, and sparse acquisitions. It emphasizes the importance of validation, contingency planning, and minimizing alteration of evidence during the acquisition process. Special considerations are given for acquiring data from RAID systems and using Linux tools or remote network tools.
The document discusses cloud computing and data security. It provides an overview of cloud computing including deployment models, service models, and sub-service models. It also discusses key aspects of cloud data security such as authentication using OTP, encryption of data using strong algorithms, and ensuring data integrity through hashing. The proposed cloud data security model uses three levels of defense - strong authentication through OTP, automatic encryption of data using a fast and strong algorithm, and fast recovery of user data.
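To make the OTP level concrete, here is a minimal sketch of a time-based one-time password (TOTP, RFC 6238) using only the standard library; the document does not specify which OTP scheme its model uses, so TOTP is an assumption for illustration:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time() // step))
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# Server and client share the secret; both compute the same 6-digit code
# within each 30-second window, so a stolen code expires almost immediately.
print(totp("JBSWY3DPEHPK3PXP"))  # hypothetical demo secret
```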
This document is a project report submitted by four students to fulfill the requirements for a Bachelor of Technology degree in Information Technology. The report discusses steganography, which is hiding secret information within other information. Specifically, the report focuses on digital image steganography, where secret messages are hidden within digital images. The report provides an introduction to steganography, a literature review on related topics like cryptography, an analysis of requirements, descriptions of how image steganography works and algorithms used, system design diagrams, implementation details, applications of the system, and directions for future work.
This document provides an overview of email forensics techniques and tools used in network forensics investigations. It discusses the typical architecture of email systems and protocols like SMTP, POP, and IMAP. Key points covered include email headers, the information contained in Received headers, and how an email travels from sender to recipient through various mail servers. Spoofing emails is also briefly explained. The document aims to introduce investigators to analyzing email evidence at different layers of the network and tools needed for forensic analysis of email messages and server logs.
Encryption, steganography, data hiding, artifact wiping, and trail obfuscation are anti-forensic techniques used to hide digital evidence and make forensic investigations difficult. These techniques aim to conceal criminal activity by hiding data in places that are hard to find and by modifying file metadata and attributes to cast doubt on evidence. While some argue these methods help improve forensic procedures, they are generally considered malicious since they are designed to cover up illegal acts and prevent authorities from proving criminal cases.
This document discusses types of attacks on computer and network security. It defines passive and active attacks. Passive attacks monitor systems without interaction and include interception and traffic analysis attacks. Interception involves unauthorized access to messages. Traffic analysis examines communication patterns. Active attacks make unauthorized changes and include masquerade, interruption, fabrication, session replay, modification, and denial of service attacks. Masquerade involves assuming another user's identity. Interruption obstructs communication. Fabrication inserts fake messages. Session replay steals login information. Modification alters packet addresses or data. Denial of service deprives access by overwhelming the target.
This document discusses privacy protection issues in cloud computing. It begins by defining cloud computing and privacy protection. The main privacy issues in cloud computing are lack of physical control over data, difficulty tracking and protecting all copies of data, and legal problems due to varying privacy laws across regions. The document proposes using a privacy manager software to help users obfuscate sensitive metadata attributes before sharing data in the cloud. This allows users to set preferences and personae to control how their personal data is handled and used by cloud services.
Visual cryptography is a cryptographic technique that allows visual information like images and text to be encrypted in a way that decryption does not require a computer and is instead a mechanical operation performed by the human visual system. It was pioneered in 1994 by Moni Naor and Adi Shamir. The technique works by breaking an image into shares such that individual shares reveal no information about the original image but combining the shares allows the image to be revealed. For example, in a 2 out of 2 visual cryptography scheme each pixel is broken into 4 subpixels distributed randomly across 2 shares such that stacking the shares recovers the original pixel value. Visual cryptography finds applications in secure identification and communication.
Design of security architecture in Information Technology (trainersenthil14)
This document discusses the key components of an example security architecture, including spheres of security with three layers of information protection, three levels of control (managerial, operational, and technical), defense in depth through layered security policies, training, technology, and redundancy, and a security perimeter defined by firewalls, a DMZ, proxy servers, and an intrusion detection and prevention system to protect internal systems from outside attacks. The presentation was given by K. Senthil Kumar, an assistant professor at Sri Eshwar College of Engineering.
The document discusses the components of an information security blueprint, including policies, standards, practices, and a security education program. It describes developing an enterprise security policy and issue-specific policies. The blueprint provides a plan for security controls, technologies, and training to ensure the organization's information is protected. It is the basis for designing and implementing all aspects of the security program.
This document provides an overview of steganography. It discusses how steganography hides messages within carriers so that the message is concealed. The document then discusses the history of steganography dating back to ancient Greece. It also discusses modern uses of steganography during the Cold War and by terrorist groups. The document outlines the objectives of the study which are to provide security during message transmission. It then discusses steganography techniques like the LSB algorithm and provides snapshots of its implementation. Finally, it discusses the results of using LSB steganography and concludes with possibilities for further enhancement.
This document covers cryptographic hash functions and their applications: simple hash functions, their requirements and security, hash functions based on cipher block chaining, and the Secure Hash Algorithm (SHA).
This document provides an overview of securing the storage infrastructure. It describes a framework for storage security that focuses on accountability, confidentiality, integrity, and availability. It also discusses the risk triad of threats, assets, and vulnerabilities. Specific security domains for storage are identified as application access, management access, and backup/recovery/archive. The chapter focuses on analyzing each domain to identify vulnerabilities and appropriate security controls.
This document discusses steganography, which is hiding messages within seemingly harmless carriers or covers so that no one apart from the intended recipient knows a message has been sent. It provides examples of steganography in text, images, and audio, as well as methods used for each. These include techniques like least significant bit insertion and temporal sampling rates. The document also covers steganalysis, which aims to detect hidden communications by analyzing changes in the statistical properties of covers.
Topics covered: security, threats to computer systems, types of software, system software, BIOS, the need for an operating system, major functions and types of operating systems, language processors, and application software.
This document discusses the IP security (IPSec) protocols. IPSec secures IP communications by authenticating and encrypting IP packets, providing data integrity, authentication, and confidentiality through protocols like the Authentication Header (AH) and Encapsulating Security Payload (ESP). It also uses the Internet Key Exchange (IKE) for automated key management and Security Associations (SAs) to identify the security parameters for authenticated secure communication.
This document discusses password-based cryptography and common attacks on passwords. It introduces password-based authentication techniques that use hashing, salting, and iteration counts to strengthen passwords. Key derivation functions are used to generate cryptographic keys from passwords. Common countermeasures against online and offline dictionary attacks are also presented, such as delayed responses, account locking, pricing via processing time, and public key cryptography.
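The hashing, salting, and iteration-count ideas above are exactly what PBKDF2 packages up; a minimal sketch using the standard library (the iteration count here is an illustrative modern default, not one the document prescribes):

```python
import hashlib, hmac, os

def derive_key(password: str, salt: bytes | None = None,
               iterations: int = 600_000) -> tuple[bytes, bytes]:
    salt = salt or os.urandom(16)   # random per-user salt defeats rainbow tables
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, key

def verify(password: str, salt: bytes, expected: bytes) -> bool:
    _, key = derive_key(password, salt)
    return hmac.compare_digest(key, expected)  # constant-time comparison

salt, stored = derive_key("correct horse battery staple")
assert verify("correct horse battery staple", salt, stored)
assert not verify("wrong guess", salt, stored)
```

The high iteration count is the "pricing via processing time" countermeasure: each offline dictionary guess costs the attacker the same 600,000 hash evaluations it costs the server once per login.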
This document discusses security issues related to moving from single cloud to multi-cloud environments. It first provides background on the increased use of cloud computing and the privacy and security concerns organizations have in using single cloud providers. It then discusses the trend toward multi-cloud/inter-cloud environments to address issues like availability and potential insider threats. The document examines research on security issues in single and multi-cloud environments and outlines the objective to automatically block attackers and securely compute data across clouds.
Information Security Principles - Access Control (idingolay)
The document discusses various concepts related to access controls and authentication methods in information security. It covers identification, authentication, authorization, accountability and different authentication factors like something you know, something you have, something you are. It also discusses access control models, biometrics, passwords and single sign-on systems.
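As a small illustration of the authorization step (distinct from authentication), here is a sketch of a role-based access check; the user names and role table are hypothetical, and real systems would back this with a directory or policy engine:

```python
# Minimal role-based access control (RBAC) lookup.
ROLE_PERMISSIONS = {
    "admin":  {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

USER_ROLES = {"alice": "admin", "bob": "viewer"}  # hypothetical assignments

def is_authorized(user: str, action: str) -> bool:
    role = USER_ROLES.get(user)
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_authorized("alice", "delete")
assert not is_authorized("bob", "write")
```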
Pretty Good Privacy (PGP) is strong encryption software that enables you to protect your email and files by scrambling them so others cannot read them. It also allows you to digitally "sign" your messages in a way that allows others to verify that a message was actually sent by you. PGP is available in freeware and commercial versions all over the world.
PGP was first released in 1991 as a DOS program that earned a reputation for being difficult. In June 1997, PGP Inc. released PGP 5.x for Win95/NT. PGP 5.x included plugins for several popular email programs.
Information security involves protecting information and systems from unauthorized access, use, disclosure, disruption, modification, or destruction. It includes measures to ensure information availability, accuracy, authenticity, confidentiality and integrity. Network security aims to secure network components, connections and contents through authentication, encryption, firewalls and vulnerability patching in a continuous process of securing, monitoring, testing and improving security. Key related terms include assets, threats, vulnerabilities, risks, attacks, and countermeasures.
The document discusses developing a system for smart cloud security from single to multi-clouds. It outlines the introduction, literature survey, existing systems, problem definition, software architecture, requirements, UML diagrams, SDLC process, and conclusions. The problem is ensuring security and availability when data is stored and processed across single or multiple cloud systems. The goal is to develop a system that provides features like availability even during cloud failures, ability to handle multiple requests, and data security across single or multi-cloud environments.
This document proposes a system for secure and dependable storage in cloud computing. It introduces key challenges with cloud data security and proposes a distributed storage solution with lightweight communication and computation. The solution ensures strong data security, fast error detection, and supports dynamic operations on outsourced data. It uses algorithms like Byzantine fault tolerance and Reed-Solomon coding to detect errors and recover from failures. An overview of the system architecture, modules, use cases and technologies used is also provided.
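A minimal sketch of the error-detection-and-recovery idea using the third-party `reedsolo` package as a stand-in for the paper's Reed-Solomon layer (the paper's actual parameters and library are not given; note that recent `reedsolo` versions return a tuple from `decode`):

```python
from reedsolo import RSCodec

rsc = RSCodec(10)                    # 10 parity bytes: corrects up to 5 errors

block = rsc.encode(b"outsourced data block")
corrupted = bytearray(block)
corrupted[0] ^= 0xFF                 # simulate corruption on one storage server
corrupted[7] ^= 0xFF

recovered = rsc.decode(bytes(corrupted))[0]   # (message, full, errata) tuple
assert recovered == b"outsourced data block"
```

The parity symbols let a client detect and repair a bounded number of corrupted bytes without re-downloading the whole file, which is what keeps the scheme's communication lightweight.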
The document discusses Karnaugh maps, a pictorial representation of variables used to simplify Boolean expressions or truth tables while avoiding ambiguity in logic design. The key points covered include:
- K-maps can be used for 2, 3, 4, or 5 variables
- Rules for grouping terms: groups may run horizontally or vertically but not diagonally, and overlapping groups are allowed
- Don't care conditions represented by 'x' can be used but cannot form their own group
- K-maps are used to simplify Boolean expressions and design combinational and sequential logic circuits.
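As a quick worked example (illustrative, not taken from the document): for a three-variable function whose four minterms all share C = 1, the minterms form a single group of four on the map and the expression collapses to one literal:

```latex
F(A,B,C) = \sum m(1,3,5,7)
         = \bar{A}\,\bar{B}\,C + \bar{A}\,B\,C + A\,\bar{B}\,C + A\,B\,C
         = C
```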
This document discusses providing accountability and access control for data shared in the cloud. It proposes a system where data owners can store encrypted data on a cloud service provider (CSP) along with access privileges for authorized clients. Clients must get permission from the data owner to retrieve encrypted data files from the CSP. The CSP generates log files of client access that are sent to the data owner for auditing purposes. The system uses algorithms like MD5, PBE and RSA for encryption, access control and integrity verification to securely share data while maintaining the data owner's control.
Ensuring Distributed Accountability in the Cloud (Suraj Mehta)
This document outlines a project to ensure distributed accountability for data sharing in the cloud. It discusses the existing centralized system and outlines the proposed decentralized system with distributed accountability and automatic logging. The document includes sections on future scope, product features like JAR creation and data policies, an overview, security measures for copying and man-in-the-middle attacks, and technical specifications. It concludes that the goal of distributed accountability based on user privilege levels was achieved.
DYNAMIC AND PUBLIC AUDITING WITH FAIR ARBITRATION FOR CLOUD DATA (Nexgen Technology)
Cloud computing security from single to multiple clouds (Shivananda Rai)
This document discusses moving from single cloud computing to multi-cloud computing for improved security. It introduces cloud computing and describes deployment models, delivery models, and the difference between single and multi-cloud. The existing system of single clouds poses risks like service failure and malicious insiders. The proposed multi-cloud system improves data integrity, availability, and reduces intrusions by utilizing multiple cloud providers. Key implementations discussed are ensuring data integrity during transfers, preventing intrusions by hackers, and increasing availability through backups on multiple providers. The conclusion supports multi-cloud for better security and future work aims to develop a framework using multi-cloud and secret sharing to further reduce security risks.
Ensuring Distributed Accountability for Data Sharing in the Cloud (Swapnil Salunke)
The document proposes a decentralized technique called the CAI framework to automatically log any access to data stored in the cloud. This framework uses Java archive (JAR) files to log data access and provide an auditing mechanism. It includes algorithms for identity-based encryption and authentication as well as push and pull modes for generating log records. The system architecture involves multiple server systems running software like Tomcat and MySQL to provide the cloud logging functionality.
ENABLING CLOUD STORAGE AUDITING WITH VERIFIABLE OUTSOURCING OF KEY UPDATES (Nexgen Technology)
This document discusses preserving data integrity in cloud computing through third party auditing. It introduces an effective third party auditor that can perform multiple auditing tasks simultaneously using the technique of bilinear aggregate signature. This reduces computation costs and storage overhead for integrity verification. The system supports dynamic data operations through techniques like fragment structure, random sampling and an index-hash table. It also allows efficient scheduling of audit activities in an audit period and assigns each third party auditor to audit a batch of files to save time. The system provides advantages like improved performance and reduced extra storage requirements.
Cloud Computing Security From Single To Multicloud (Sandip Karale)
This document presents a project on improving cloud computing security from single to multi-cloud. It discusses the issues with single cloud providers in terms of availability and security risks. The proposed system aims to address these issues by using a multi-cloud model called DepSky that distributes data across multiple cloud providers. DepSky uses Shamir's secret sharing algorithm and abstracts data, storage, and computation across different cloud layers to improve availability, prevent data loss and corruption, and enhance privacy.
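A minimal sketch of the Shamir secret-sharing algorithm DepSky is described as using, here as a (2, 3) scheme over a prime field; DepSky's actual implementation and field choice are not specified, and `random` is used only for brevity (a real system would use a CSPRNG):

```python
import random

P = 2**127 - 1                       # a Mersenne prime, large enough for demo

def split(secret: int, k: int = 2, n: int = 3):
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    # Share i is the degree-(k-1) polynomial evaluated at x = i (never x = 0).
    return [(x, sum(c * pow(x, e, P) for e, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def combine(shares):
    # Lagrange interpolation at x = 0 recovers the constant term = secret.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(123456789)                 # one share per cloud provider
assert combine(shares[:2]) == 123456789   # any 2 of the 3 shares suffice
```

With one share per provider, no single cloud (or single malicious insider) learns anything about the data, yet any two providers together can restore it, which is the availability-plus-privacy trade the multi-cloud model is after.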
This document summarizes a research paper that proposes a privacy-preserving public auditing scheme for regenerating-code-based cloud storage. Existing methods only allow private auditing by the data owner, but the proposed system utilizes a third-party auditor and semi-trusted proxy to check data integrity and repair failures on behalf of the data owner. This allows public auditing while maintaining security and reducing the online burden for data owners. The system takes advantage of the properties of regenerating codes to efficiently compute authenticators.
Privacy Preserving Public Auditing for Data Storage Security in Cloud (Girish Chandra)
This document outlines the stages of a proposed privacy-preserving public auditing system for secure cloud storage. It introduces the need for such a system by describing challenges with cloud data integrity and existing solutions. The proposed system would allow a third party auditor to efficiently audit cloud data storage without accessing the actual data files, while preserving user data privacy. It would utilize public key cryptography and random masking techniques. The document claims this system would meet the goals of supporting privacy-preserving audits and handling multiple concurrent audit tasks through the use of techniques like bilinear aggregate signatures.
Identity based proxy-oriented data uploading and remote data integrity checki... (Finalyearprojects Toall)
The document discusses an identity-based proxy-oriented data uploading and remote data integrity checking model called IDPUIC. It proposes allowing clients to delegate proxies to upload and process data when clients cannot directly access public cloud servers. It also addresses remote data integrity checking, which allows clients to check if their outsourced data remains intact without downloading the whole data. The document then provides a formal definition, system model, and security model for IDPUIC before describing an efficient and flexible IDPUIC protocol based on bilinear pairings that is provably secure based on the computational Diffie-Hellman problem.
Key aggregate cryptosystem for scalable data sharing in cloud (Sravan Narra)
The document proposes a new key-aggregate cryptosystem (KAC) for secure and efficient data sharing in cloud storage. KAC allows encrypting data under a public key and identifier, and extracting an aggregate secret key from a master secret key. The aggregate key is compact yet provides decryption power for any subset of ciphertexts. This allows flexible delegation of decryption rights by sending a constant-sized aggregate key for sharing encrypted data on cloud storage. Formal security analysis is provided for the cryptosystem in the standard model.
This document presents an agenda for discussing identity-based secure distributed data storage schemes. The agenda includes sections on an abstract, introduction, existing systems, objectives, proposed systems, literature survey, system requirements, system design including data flow diagrams and class diagrams, testing, results and performance evaluation, and conclusions. The introduction discusses cloud computing services models. The existing systems section discusses database-as-a-service and its disadvantages. The proposed systems would provide two identity-based secure distributed data storage schemes with properties like file-based access control and protection against collusion attacks.
Privacy preserving public auditing for regenerating-code-based cloud storage (parry prabhu)
This document proposes a public auditing scheme for cloud storage using regenerating codes to provide fault tolerance. It introduces a proxy that is authorized to regenerate authenticators in the absence of data owners, solving the regeneration problem. The scheme uses a novel public verifiable authenticator generated by keys that allows regeneration using partial keys, removing the need for data owners to stay online. It also randomizes encoding coefficients with a pseudorandom function to preserve data privacy.
This document presents two identity-based secure distributed data storage schemes. The first scheme is secure against chosen plaintext attacks, while the second is secure against chosen ciphertext attacks. Both schemes allow a file owner to independently set access permissions for receivers. When a receiver makes a query, they can only access one file rather than all files from the owner. The schemes also protect against collusion attacks.
Secure Authorised De-duplication using Convergent Encryption Technique (Eswar Publications)
Cloud computing means retrieving and storing information and programs over the Internet instead of on your computer's hard drive. To protect the confidentiality of sensitive data while supporting deduplication, data is encrypted with the proposed convergent encryption method before outsourcing. This work makes the first attempt to formally address the problem of authorized data deduplication. We also present new deduplication constructions supporting authorized duplicate checks in the cloud using a symmetric algorithm. Data deduplication is a technique for eliminating repeated data; deduplication is commonly used on cloud servers to reduce storage space. To prevent unauthorized data access and the creation of duplicate data on the cloud, the encryption technique encrypts the data before it is stored on the cloud server.
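A minimal sketch of convergent encryption: the key is derived from the plaintext itself, so identical files encrypt to identical ciphertexts that the server can deduplicate without reading them. AES-GCM with a key-derived nonce is an illustrative cipher choice; the paper's concrete algorithm is not stated here:

```python
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def convergent_encrypt(data: bytes):
    key = hashlib.sha256(data).digest()           # key = H(plaintext)
    nonce = hashlib.sha256(key).digest()[:12]     # deterministic nonce
    ciphertext = AESGCM(key).encrypt(nonce, data, None)
    tag = hashlib.sha256(ciphertext).hexdigest()  # dedup index for the server
    return key, ciphertext, tag

def convergent_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    nonce = hashlib.sha256(key).digest()[:12]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

k1, c1, t1 = convergent_encrypt(b"same file")
k2, c2, t2 = convergent_encrypt(b"same file")
assert c1 == c2 and t1 == t2          # duplicates collide, enabling dedup
assert convergent_decrypt(k1, c1) == b"same file"
```

Determinism is deliberate here: the usual rule of never reusing a nonce is traded away precisely so that two owners of the same file produce the same ciphertext, which is what makes encrypted deduplication possible.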
IRJET - Secure Scheme For Cloud-Based Multimedia Content Storage (IRJET Journal)
This document proposes a secure scheme for cloud-based multimedia content storage. It has two novel components: (1) a method to create signatures for 3D videos that captures depth signals efficiently, and (2) a distributed matching engine for multimedia objects that achieves high scalability. The system was implemented and deployed on Amazon and private clouds. Experiments on over 11,000 3D videos and 1 million images showed the system accurately detects over 98% of copies, outperforming YouTube's protection system which fails to detect most 3D video copies. The system provides cost-efficient, scalable multimedia content protection leveraging cloud infrastructure.
Extensive Security and Performance Analysis Shows the Proposed Schemes Are Pr... (IJERA Editor)
In this paper, we utilize the public-key-based homomorphic authenticator and uniquely integrate it with the random mask technique to achieve a privacy-preserving public auditing system for cloud data storage security while keeping all the above requirements in mind. To support efficient handling of multiple auditing tasks, we further explore the technique of bilinear aggregate signatures to extend our main result into a multi-user setting, where the TPA can perform multiple auditing tasks simultaneously. Extensive security and performance analysis shows the proposed schemes are provably secure and highly efficient. We also show how to extend our main scheme to support batch auditing for the TPA upon delegations from multiple users.
International Journal of Computational Engineering Research (IJCER) (ijceronline)
International Journal of Computational Engineering Research (IJCER) is an international online journal published monthly in English. The journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
This document describes the development of an employee management system. It discusses:
1) The programming tools used - Microsoft Access for the database and C# with .NET Framework for the application. Access allows constructing relational databases while C# provides an object-oriented interface.
2) The database design, which includes 6 tables - one main employee table and 5 child tables for additional employee details like work history, time records, and contact information. The tables are related through primary and foreign keys.
3) The development process, which first analyzed user needs, designed the database structure, then constructed the graphical user interface in the application to interact with the database according to its structure.
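An illustrative sketch of the main-table/child-table relationship described above, using SQLite for brevity (the project itself used Microsoft Access with a C#/.NET front end, and the table and column names here are hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.execute("""CREATE TABLE employee (
    emp_id INTEGER PRIMARY KEY,
    name   TEXT NOT NULL)""")
con.execute("""CREATE TABLE work_history (
    id     INTEGER PRIMARY KEY,
    emp_id INTEGER NOT NULL REFERENCES employee(emp_id),
    role   TEXT,
    year   INTEGER)""")

con.execute("INSERT INTO employee VALUES (1, 'A. Smith')")
con.execute("INSERT INTO work_history VALUES (NULL, 1, 'Clerk', 2012)")

# The foreign key ties each history row back to exactly one employee.
for row in con.execute("""SELECT e.name, w.role, w.year
                          FROM employee e
                          JOIN work_history w ON e.emp_id = w.emp_id"""):
    print(row)
```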
The document discusses the objectives, feasibility study, and implementation specifications for an Income Tax Department Management System project. The objectives are to overcome paper-based problems and easily manage records of PAN card holders and employees. A feasibility study assesses the technical, operational, and economic feasibility of the proposed system. The implementation will use ASP.NET on Windows with a SQL Server database. Hardware requirements include a Pentium PC with 512MB RAM and 80GB hard drive.
Project on multiplex ticket booking system globsyn2014 (Md Imran)
This document appears to be a project report for a movie ticket booking system developed using ASP.Net. It includes sections like acknowledgements, objectives, feasibility analysis, system requirements, database design, tables used, data flow diagrams, screenshots of the system, code snippets and references. The system allows users to book movie tickets, and has functionality for admins to add movies, theaters and manage the system. Group members who worked on the project are also listed.
Privacy Preserving Public Auditing and Data Integrity for Secure Cloud Storag... (INFOGAIN PUBLICATION)
Using cloud services, anyone can remotely store their data and enjoy on-demand, high-quality applications and services from a shared pool of computing resources, without the burden of local data storage and maintenance. The cloud is a commonplace venue for storing data as well as sharing it. However, preserving the privacy and maintaining the integrity of data during public auditing remains an open challenge. In this paper, we introduce a third party auditor (TPA), which keeps track of all the files along with their integrity. The task of the TPA is to verify the data, so that the user can be worry-free. Verification of data is done on the aggregate authenticators sent by the user and the Cloud Service Provider (CSP). For this, we propose a secure cloud storage system that supports privacy-preserving public auditing and blockless data verification over the cloud.
IRJET - Enabling Identity-Based Integrity Auditing and Data Sharing with Sensi... (IRJET Journal)
This document summarizes a research paper that proposes a method for enabling identity-based integrity auditing and data sharing with sensitive information hiding for secure cloud storage. The method allows users to remotely store and share data in the cloud while ensuring data integrity and hiding sensitive information. It involves generating QR codes linked to file identifiers for data sharing and using signatures during integrity auditing to verify files stored in the cloud. The proposed method aims to address limitations in existing cloud storage systems regarding sensitive data sharing and remote integrity auditing.
IRJET - Privacy Preserving and Proficient Identity Search Techniques for C... (IRJET Journal)
This document presents a privacy preserving and efficient identity search technique for cloud data security. It proposes a scheme using visual-encryption techniques to overcome issues with untrusted cloud storage. The existing methodology uses data signing algorithms but has limitations as the private key depends on the security of one computer. The proposed system uses visual-cryptographic encryption, which scrambles data using an algorithm requiring a key to decrypt. It involves users uploading encrypted files, administrators approving requests to view files through live video verification, and decryption using the appropriate key. The scheme aims to securely store large volumes of data while allowing identity verification for file access on the cloud.
IRJET - Analysis of using Software Defined and Service Coherence Approach (IRJET Journal)
This document discusses using a software-defined approach and service coherence to analyze the performance of querying electronic health records stored in a cloud database. It proposes creating a query processing service that can store, access, and manipulate unstructured electronic medical record data using a file-based storage system. The performance of this system will be evaluated by measuring query response times and comparing to a traditional database. A software-defined controller is used to manage and provision resources from a pooled infrastructure to applications. Electronic health records are stored as objects with attributes rather than files to improve access and allow metadata tagging for easier retrieval.
The document summarizes two recent studies on access control. It discusses the authors' contributions in each study, their motivations, and potential additional areas of study. The first study introduced metrics to evaluate access control rule sets and provide a scientific method for comparing rule sets. The second study surveyed access control in fog computing, highlighting security challenges and providing requirements and taxonomies for access control models. It suggests attribute-based encryption as an area for further fog computing access control research.
Development of a Suitable Load Balancing Strategy In Case Of a Cloud Computi... (IJMER)
Cloud computing is an attractive technology in the field of computer science. Gartner's report says that the cloud will bring changes to the IT industry. The cloud is changing our lives by providing users with new types of services; users get service from a cloud without paying attention to the details. NIST defines cloud computing as a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. More and more people are paying attention to cloud computing. Cloud computing is efficient and scalable, but maintaining the stability of processing so many jobs in the cloud computing environment is a very complex problem, and load balancing has received much attention from researchers. Since the job arrival pattern is not predictable and the capacities of the nodes in the cloud differ, workload control is crucial to improving system performance and maintaining stability. Load balancing schemes can be either static or dynamic, depending on whether the system dynamics matter: static schemes do not use system information and are less complex, while dynamic schemes bring additional costs for the system but can adapt as the system status changes. A dynamic scheme is used here for its flexibility. The model has a main controller and balancers to gather and analyze the information, so the dynamic control has little influence on the other working nodes. The system status then provides a basis for choosing the right load balancing strategy. The load balancing model given in this research article is aimed at the public cloud, which has numerous nodes with distributed computing resources in many different geographic locations. The model therefore divides the public cloud into several cloud partitions; when the environment is very large and complex, these divisions simplify load balancing. The cloud has a main controller that chooses the suitable partition for each arriving job, while the balancer for each cloud partition chooses the best load balancing strategy.
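A minimal sketch of that two-level dispatch: a main controller picks a partition, then the partition's balancer picks the least-loaded node. The load metric and the values below are illustrative assumptions, not the paper's exact strategy:

```python
partitions = {                        # hypothetical load readings in [0, 1]
    "us-east": {"node1": 0.82, "node2": 0.35},
    "eu-west": {"node3": 0.91, "node4": 0.88},
}

def partition_load(nodes: dict) -> float:
    return sum(nodes.values()) / len(nodes)

def dispatch(job: str) -> str:
    # Main controller: pick the partition with the lowest average load.
    part = min(partitions, key=lambda p: partition_load(partitions[p]))
    # Partition balancer: pick the least-loaded node within that partition.
    node = min(partitions[part], key=partitions[part].get)
    partitions[part][node] += 0.05    # crude accounting for the new job
    return f"{job} -> {part}/{node}"

print(dispatch("job-42"))             # e.g. job-42 -> us-east/node2
```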
IRJET - Auditing and Resisting Key Exposure on Cloud Storage (IRJET Journal)
1. The document discusses auditing and resisting key exposure in cloud storage. It proposes a new framework called an auditing protocol with key-exposure resilience that allows integrity of stored data to still be verified even if the client's current secret key is exposed.
2. It formalizes the definition and security model for such a protocol and proposes an efficient practical construction. The security proof and asymptotic performance analysis show the proposed protocol is secure and efficient.
3. Key techniques used include periodic key updates, homomorphic linear authenticators, and a novel authenticator construction to boost forward security and provide proof of retrievability with the current design.
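A minimal sketch of hash-chain key evolution, one simple way to obtain the forward-security property described above; the paper's actual construction (homomorphic linear authenticators with periodic updates) is more involved, so this is only a conceptual stand-in:

```python
import hashlib, hmac

def next_key(key: bytes) -> bytes:
    # One-way update: exposing key_t reveals nothing about key_{t-1},
    # so authenticators made in earlier periods remain trustworthy.
    return hashlib.sha256(b"key-update" + key).digest()

def authenticate(key: bytes, block: bytes) -> bytes:
    return hmac.new(key, block, hashlib.sha256).digest()

key = b"\x01" * 32                    # period-0 secret (demo value only)
tag0 = authenticate(key, b"file block, period 0")
key = next_key(key)                   # client deletes the old key here
tag1 = authenticate(key, b"file block, period 1")
# An attacker who steals the period-1 key still cannot forge tag0.
```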
Authenticated and unrestricted auditing of big data space on cloud through v... (IJMER)
Cloud unlocks a new era in information technology, with the capability of providing customers with a variety of scalable and flexible services. Cloud provides these services through a prepaid system, which helps customers cut down on large investments in IT hardware and other infrastructure. Also, from the cloud viewpoint, customers do not have control over their respective data, so the security of data is a big issue in using a cloud service. Present work shows that data auditing can be done by any trusted third-party agent, known as an auditor. The auditor can verify the integrity of the data without having ownership of the actual data. This approach has several disadvantages. One of them is the absence of a required verification procedure between the auditor and the service provider, which means any person can ask for verification of a file, putting the auditing at certain risk. Also, in the existing scheme, data updates can be done only as coarse-granular updates, i.e., blocks of uneven size, resulting in repeated communication and updating of the auditor for a whole file block, causing higher communication costs and requiring more storage space. In this paper, the emphasis is on giving a proper breakdown of the types of fixed-granular updates and putting forward a design capable of maintaining authenticated and unrestricted auditing. Based on this system, there is also an approach for remarkably decreasing the communication costs of auditing small updates.
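As a back-of-the-envelope illustration of why fine-grained updates cut auditing traffic, the Python sketch below (an assumption-laden toy, not the paper's scheme) keeps one digest per fixed-size block, so a one-byte update obliges the client to re-send only the affected block's bytes and digest rather than re-auditing the whole file.

import hashlib

BLOCK = 4096  # illustrative block size in bytes

def block_digests(data):
    return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

data = bytearray(b"x" * 3 * BLOCK)
digests = block_digests(bytes(data))

# a small, fine-grained update touches exactly one block ...
data[5000] = ord("y")
touched = 5000 // BLOCK

# ... so only that block's digest (and that block's bytes) must be
# re-sent to the auditor, instead of re-auditing the entire file
digests[touched] = hashlib.sha256(
    bytes(data[touched * BLOCK:(touched + 1) * BLOCK])).hexdigest()
print("re-audited block:", touched)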
Public Integrity Auditing for Shared Dynamic Cloud Data with Group User Revoc...1crore projects
This document describes a public integrity auditing scheme for shared dynamic cloud data with group user revocation. It discusses the problem of collusion attacks between cloud servers and revoked group users in existing schemes. The proposed scheme uses vector commitment and verifier-local revocation group signatures to enable public checking, efficient user revocation, and prevent collusion attacks. It aims to achieve security, correctness, efficiency, countability, and traceability. The scheme relies on strong Diffie-Hellman and decision linear assumptions.
Public integrity auditing for shared dynamic cloud data with group user revoc...Pvrtechnologies Nellore
This document describes a public integrity auditing scheme for shared dynamic cloud data that supports secure group user revocation. It identifies limitations in existing schemes, such as lack of consideration for data secrecy during updates and potential collusion attacks during revocation. The proposed scheme uses vector commitment, asymmetric group key agreement, and group signatures to enable encrypted data updates among group users and efficient yet secure user revocation. It aims to provide public auditing of data integrity, as well as properties like traceability and accountability. The security and performance of the scheme are analyzed and shown to improve upon relevant existing works.
The document discusses major design issues in cloud computing operating systems and techniques to mitigate them. It outlines issues like providing sufficient APIs, security, trust, confidentiality and privacy. To address these, a cloud OS needs to design abstract interfaces following open standards for interoperability. It also needs mechanisms like trusted third parties to establish trust dynamically between systems. The OS must allow for multitenancy while preventing confidentiality breaches through techniques like limiting residual data.
REPORT
1. OBJECTIVE AND SCOPE
1.1 Objective:
The main objective of this report is to propose a novel, highly decentralized information accountability framework to keep track of the actual usage of the user's data in the cloud, to build a basic understanding of the existing cloud service model, and to make an in-depth analysis of how cloud storage works.
1.2 Scope
The main scope of this report is to provide solutions for the security of the data. When a user wants to download an uploaded file from the cloud, a key is needed; that key is provided by the cloud server and is delivered in encrypted form.
2. SYSTEM ANALYSIS
2.1 Existing System:
The importance of ensuring remote data integrity has been highlighted by research works under different system and security models. While these techniques can be useful to ensure storage correctness without users possessing a local copy of the data, they all focus on the single-server scenario. They may be useful for quality-of-service testing, but they do not guarantee data availability in case of server failures. Although directly applying these techniques to distributed storage (multiple servers) could be straightforward, the resulting storage verification overhead would be linear in the number of servers. As a complementary approach, researchers have also proposed distributed protocols for ensuring storage correctness across multiple servers or peers. However, while providing efficient cross-server storage verification and data availability assurance, these schemes all focus on static or archival data. As a result, their capability of handling dynamic data remains unclear, which inevitably limits their full applicability in cloud storage scenarios.
2.2 MODULES:
2.2.1 Data integrity:
Although cloud infrastructures are much more powerful and reliable than personal computing devices, they are exposed to a broad range of both internal and external threats to data integrity. Outsourcing data into the cloud is economically attractive given the cost and complexity of long-term large-scale data storage, but the lack of strong assurance of data integrity and availability may impede its wide adoption by both enterprise and individual cloud users. To assure cloud data integrity and availability and to enforce the quality of the cloud storage service, efficient methods that enable on-demand data correctness verification on behalf of cloud users have to be designed.
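A minimal sketch of on-demand correctness verification, assuming a simple hash-based spot check rather than the homomorphic techniques used in the literature: the user keeps per-block digests before outsourcing, then later challenges the server on a randomly chosen block.

import hashlib, random

def precompute(blocks):
    # digests the verifier retains before outsourcing the file
    return [hashlib.sha256(b).hexdigest() for b in blocks]

def challenge(stored_blocks, digests):
    # spot-check a random block: ask the server for it and compare digests
    i = random.randrange(len(digests))
    returned = stored_blocks[i]               # what the cloud sends back
    return i, hashlib.sha256(returned).hexdigest() == digests[i]

blocks = [b"alpha", b"beta", b"gamma"]
digests = precompute(blocks)
blocks[1] = b"corrupted"                      # simulate silent data damage
print(challenge(blocks, digests))             # fails when block 1 is drawn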
2.2.2 Data security for cloud:
This module addresses the problem of data security in cloud data storage, which is essentially a distributed storage system. To assure cloud data integrity and availability and to enforce the quality of a dependable cloud storage service for users, it uses an effective and flexible distributed scheme with explicit dynamic data support.
2.2.3 Distributed storage:
Although distributing storage across multiple servers could be straightforward, the resulting storage verification overhead would be linear in the number of servers. As a complementary approach, researchers have also proposed distributed protocols for ensuring storage correctness across multiple servers or peers. This module uses an effective and flexible distributed storage verification scheme with explicit dynamic data support to ensure the correctness and availability of users' data in the cloud.
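The sketch below illustrates, under the simplifying assumption of full replication and plain SHA-256 digests (not the scheme itself), how storage correctness can be checked across multiple servers: each server's copy is compared against the digest the user retained, and any failing server is flagged for repair from a healthy replica.

import hashlib

def file_digest(data):
    return hashlib.sha256(data).hexdigest()

def verify_replicas(reference_digest, servers):
    # check every server's copy against the digest kept by the user
    return {name: file_digest(copy) == reference_digest
            for name, copy in servers.items()}

original = b"user file contents"
servers = {"s1": original, "s2": original, "s3": b"bit-rotted copy"}
print(verify_replicas(file_digest(original), servers))
# {'s1': True, 's2': True, 's3': False} -> repair s3 from s1 or s2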
2.3 Feasibility study:
The feasibility of the project is analyzed in this phase and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis the feasibility study of the proposed system is carried out. This is to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential. The assessment is based on an outline design of system requirements in terms of input, processes, output, fields, programs, and procedures. This can be quantified in terms of volumes of data, trends, frequency of updating, etc., in order to estimate whether the new system will perform adequately or not. Technological feasibility is carried out to determine whether the company has the capability, in terms of software, hardware, personnel and expertise, to handle the completion of the project. When writing a feasibility report the following should be taken into consideration:
A brief description of the business, to assess the factors which could affect the study
The part of the business being examined
The human and economic factors
The possible solutions to the problems
At this level, the concern is whether the proposal is both technically and legally feasible. Three
key considerations involved in the feasibility analysis are
1. TECHNICAL FEASIBILITY
2. SOCIAL FEASIBILITY
3. ECONOMICAL FEASIBILITY
2.3.1 Technical feasibility:
This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the outsourcer. The developed system must have modest requirements, as only minimal or no changes are required for implementing this system.
2.3.2 Social feasibility:
This aspect of the study checks the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but must instead accept it as a necessity. The level of acceptance by the users depends solely on the methods employed to educate users about the system and to make them familiar with it. Their level of confidence must be raised so that they are also able to offer constructive criticism, which is welcomed, as they are the final users of the system.
2.3.3 Economical feasibility:
This study is carried out to check the economic impact that the system will have on the organization. The amount of funding that the company can pour into the research and development of the system is limited, so the expenditures must be justified. The developed system is well within the budget, and this was achieved because most of the technologies used are freely available; only the customized products had to be purchased. Economic analysis is the most frequently used method for evaluating the effectiveness of a new system. More commonly known as cost/benefit analysis, the procedure is to determine the benefits and savings that are expected from a candidate system and compare them with its costs. If benefits outweigh costs, then the decision is made to design and implement the system. An entrepreneur must accurately weigh the costs versus the benefits before taking action.
3. SYSTEM DESIGN
3.1 Applicable Diagram:
3.1.1 Data Flow Diagram:
A data flow diagram is a two-dimensional diagram that explains how data is processed and transferred in a system. The graphical depiction identifies each source of data and how it interacts with other data sources to reach a common output.
3.1.2 E-R Diagram:
[Diagram: the Authentication process (register, login) connects the User (browse, upload, download) and the Cloud Server (key generation) to data stores D1, D2 and D3.]
In software engineering, an entity-relationship model (ERM) is an abstract and conceptual representation of data. Entity-relationship modeling is a database modeling method used to produce a type of conceptual schema or semantic data model of a system, often a relational database, and its requirements in a top-down fashion. Diagrams created by this process are called entity-relationship diagrams, ER diagrams, or ERDs.
An entity-relationship (ER) diagram is a specialized graphic that illustrates the relationships between entities in a database. ER diagrams often use symbols to represent three different types of information: boxes are commonly used to represent entities, diamonds are normally used to represent relationships, and ovals are used to represent attributes.
3.1.3 System Flow Diagram:
[Diagram: users register and log in (authentication); after login they can upload files; for download, a key is generated by the cloud server via the cloud provider, and the user gets the key from the cloud server to view and download the files.]
New users have to register with the application and then log in. This module helps the cloud server recognize the authorized users of the application. The registration module provides authentication for new users; user verification is needed in every system to maintain security and prevent misuse. Each authorized user has a user-id/name and a password for login. Users upload files after logging in; to download a file they need a key, and that key is generated by the cloud server. After entering the key, the user can download the file.
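A minimal Python sketch of this workflow, with illustrative in-memory dictionaries standing in for the application's database and cloud storage:

import secrets

users, files, pending_keys = {}, {}, {}

def register(username, password):
    users[username] = password

def login(username, password):
    return users.get(username) == password

def upload(username, filename, data):
    files[(username, filename)] = data

def request_download(username, filename):
    # the cloud server generates a one-time key for this download request
    key = secrets.token_hex(8)
    pending_keys[(username, filename)] = key
    return key      # in the proposed system this is delivered securely

def download(username, filename, key):
    if pending_keys.get((username, filename)) == key:
        del pending_keys[(username, filename)]    # key is single-use
        return files[(username, filename)]
    raise PermissionError("invalid or expired download key")

register("alice", "pw")
assert login("alice", "pw")
upload("alice", "notes.txt", b"hello cloud")
k = request_download("alice", "notes.txt")
print(download("alice", "notes.txt", k))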
3.1.4 Activity Diagram:
[Diagram: users log in, upload the file, and the cloud server/cloud provider generates the key.]
An activity diagram is a loosely defined diagram used to show workflows of stepwise activities and actions, with support for choice, iteration and concurrency. In UML, activity diagrams can be used to describe the business and operational step-by-step workflows of components in a system. UML activity diagrams could potentially model the internal logic of a complex operation. In many ways UML activity diagrams are the object-oriented equivalent of flow charts and data flow diagrams (DFDs) from structured development.
[Diagram: Login → Browse → Upload File → Key Generate for downloading file → View the File.]
3.1.5 Use-Case Diagram:
A use case diagram is a type of behavioral diagram created from a use-case analysis. The purpose of a use case diagram is to present an overview of the functionality provided by the system in terms of actors, their goals, and any dependencies between those use cases.
[Diagram: the User actor registers, logs in, uploads a file to the cloud server, and receives a key; the cloud server actor responds to the user.]
3.1.6 Sequence Diagram:
A sequence diagram in UML is a kind of interaction diagram that shows how processes operate with one another and in what order. It is a construct of a message sequence chart. Sequence diagrams are sometimes called event-trace diagrams, event scenarios, or timing diagrams. The diagram below shows the sequence flow of the proposed system.
[Diagram: user register → user login → user upload file → store the file on cloud → distributed storage → information store; admin login → admin data store → add to database. Participants: USER, REGISTER, LOGIN, UPLOAD FILE, CLOUD STORAGE, DISTRIBUTED STORAGE, DATABASE, ADMIN.]
3.1.7 Collaboration Diagram:
A collaboration diagram shows the objects and relationships involved in an interaction, and the
sequence of messages exchanged among the objects during the interaction.
The collaboration diagram can be a decomposition of a class, class diagram, or part of a
class diagram. It can be the decomposition of a use case, use case diagram, or part of a use case
diagram.
The collaboration diagram shows messages being sent between classes and objects (instances). A diagram is created for each system operation that relates to the current development cycle (iteration).
[Diagram: 1: user register; 2: user login; 3: user upload file; 4: store the file on cloud; 5: distributed storage; 6: admin login; 7: information store; 8: admin data store; 9: add to database. Objects: USER, REGISTER, LOGIN, UPLOAD FILE, CLOUD STORAGE, DISTRIBUTED STORAGE, DATABASE, ADMIN.]
3.2 TABLE DESCRIPTION:
3.2.1 User registration:
COLUMN NAME       DATATYPE
E-mail id         Varchar(100)
Username          Varchar(100)
Password          Varchar(100)
Confirm Password  Varchar(100)
Address           Varchar(100)
Zip Code          Varchar(100)
Mobile No         Varchar(100)
Image Password    Varchar(50)

3.2.2 User login:
COLUMN NAME       DATATYPE
Email Add         Varchar(100)
Username          Varchar(100)
Password          Varchar(100)
Confirm Password  Varchar(100)
Address           Varchar(100)
3.2.3 FILES:
COLUMN NAME       DATATYPE
FileName          Varchar(100)
File Path         Varchar(100)
RKey              Numeric(18,0)
Size              Int

3.2.4 FILES UPLOAD:
COLUMN NAME       DATATYPE
ID                Int
Username          Varchar(50)
Name              Varchar(100)
Content Type      Varchar(50)
Size              Int
Data              VarBinary(MAX)
3.2.5 CLOUD REGISTRATION:
COLUMN NAME       DATATYPE
Userid            Varchar(100)
Username          Varchar(100)
Password          Varchar(100)
Confirm Password  Varchar(100)
Address           Varchar(100)
Zip Code          Varchar(100)
Mobile No         Varchar(100)

3.2.6 CLOUD SERVER:
COLUMN NAME       DATATYPE
Username          Varchar(100)
File Path         Varchar(100)
Key Generate      Varchar(100)
IP Address        Varchar(100)
Date              Varchar(100)
Up Key            Varchar(100)
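For illustration, the sketch below creates two of these tables with Python's built-in sqlite3 module. The report's schema targets SQL Server types (e.g. VarBinary(MAX)), so the types here are SQLite approximations and the snake_case column names are our own.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE user_registration (
    email            VARCHAR(100),
    username         VARCHAR(100),
    password         VARCHAR(100),
    confirm_password VARCHAR(100),
    address          VARCHAR(100),
    zip_code         VARCHAR(100),
    mobile_no        VARCHAR(100),
    image_password   VARCHAR(50)
);
CREATE TABLE files_upload (
    id           INTEGER PRIMARY KEY,
    username     VARCHAR(50),
    name         VARCHAR(100),
    content_type VARCHAR(50),
    size         INTEGER,
    data         BLOB          -- VarBinary(MAX) in the report's schema
);
""")
conn.execute("INSERT INTO files_upload (username, name, content_type, size, data)"
             " VALUES (?,?,?,?,?)",
             ("alice", "notes.txt", "text/plain", 11, b"hello cloud"))
print(conn.execute("SELECT username, name FROM files_upload").fetchall())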
4. SYSTEM TESTING AND IMPLEMENTATION
Testing is the one step in the software engineering process that could be viewed as destructive rather than constructive. Software testing is a critical element of software quality assurance and represents the ultimate review of specification, design and coding; it thus represents an interesting anomaly for software development.
Testing is vital to the success of the system. Errors can be injected at any stage during development. System testing makes the logical assumption that if all the parts of the system are correct, the goal will be successfully achieved. During testing, the program under test is executed with a set of data, and the output of the program for the test data is evaluated to determine whether the program is performing as expected. Testing cannot show the absence of defects; it can only show that software defects are present.
The objectives of testing are:
Testing is a process of executing a program with the intent of finding an error.
A good test case is one that has a high probability of finding an as-yet-undiscovered error.
A successful test is one that uncovers an as-yet-undiscovered error. The software developed has been tested using the following strategies; any errors encountered are corrected, and the part of the program, procedure or function concerned is put to testing again until all the errors are removed.
The testing steps are:
Unit Testing
Module Testing
Integration Testing
Unit testing:
Unit testing focuses verification effort on the smallest unit of the software design. In this project it comprises the tests performed by an individual programmer prior to the integration of the unit into a larger system, and it is carried out during coding itself. In this testing step, each module, such as registration and login, is checked to ensure it works satisfactorily and produces the expected output.
Module testing:
Since this is a real-time project, the modules may collect inputs from other modules or sub-modules. Likewise, they can forward their outputs as inputs to other modules or sub-modules. Module testing is therefore one of the important tests in the system development cycle. In this project it is applied to the login module: the output from registration is used as input for the login module, as sketched below.
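The sketch uses Python's unittest, with toy register/login functions standing in for the project's actual modules; names and behavior are illustrative assumptions.

import unittest

# toy implementations standing in for the project's registration/login modules
users = {}

def register(username, password):
    users[username] = password
    return username                      # output consumed by the login module

def login(username, password):
    return users.get(username) == password

class ModuleChainTest(unittest.TestCase):
    def test_registration_output_feeds_login(self):
        name = register("bob", "secret")          # module test: one module's
        self.assertTrue(login(name, "secret"))    # output is the next's input

    def test_login_rejects_wrong_password(self):
        register("carol", "pw1")
        self.assertFalse(login("carol", "pw2"))

if __name__ == "__main__":
    unittest.main()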
Integration testing:
In this project, data can be lost across an interface; one module can have an adverse effect on another; and sub-functions, when combined, may not produce the desired function. Integration testing is a systematic technique for constructing the program while at the same time conducting tests to uncover errors associated with the interfaces.
The objective is to take unit-tested modules and build the program structure that has been dictated by the design. All modules are combined in this testing and the entire program is tested as a whole. Correction is difficult at this stage because of the difficulty of isolating individual modules. At integration testing, the software is completely assembled as a package, interfacing errors are uncovered and corrected, and a final series of software tests, validation testing, begins.
Validation testing can be defined in many ways, but a simple definition is that validation succeeds when the software functions in a manner that is reasonably expected by the customer. Software validation is achieved through a series of black box tests that demonstrate conformity with the requirements. After a validation test has been conducted, one of two conditions exists: either the function or performance characteristics conform to specification and are accepted, or a deviation from specification is uncovered and a deficiency list is created. Deviations or errors discovered at this step are corrected prior to the completion of the project, with the help of the user, by negotiating to establish a method for resolving the deficiencies. Thus the proposed system under consideration has been tested using validation testing and found to be working satisfactorily.
Advantages of black box testing:
It is more effective on larger units of code than glass box testing.
The tester needs no knowledge of the implementation, including the specific programming language.
The tester and the programmer are independent of each other.
Tests are done from a user's point of view.
Black box testing helps to expose any ambiguities or inconsistencies in the specification.
Disadvantages of black box testing:
Only a small number of possible inputs can actually be tested; testing every possible input stream would take nearly forever without clear and concise specifications.
Test cases are hard to design.
There may be unnecessary repetition of test inputs if the tester is not informed of test cases the programmer has already tried.
It may leave many program paths untested.
It cannot be directed toward specific segments of code, which may be very complex.
Most testing-related research has been directed toward glass box testing.
5. FUTURE ENHANCEMENTS
In this report we proposed a security method to avoid hacking in the cloud infrastructure: the key generated by the service provider for a user's request will be sent to the user's respective registered login id. In existing systems the key is displayed directly to the user, so there is a chance of the user's details being hacked. To avoid such circumstances, users should receive their details, together with the key generated by the service provider, at their registered id.
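A sketch of delivering the key to the registered id using Python's standard smtplib; the addresses and SMTP host are placeholders, and a reachable mail server is assumed.

import smtplib
from email.message import EmailMessage

def send_key(key, recipient, smtp_host="localhost"):
    # deliver the download key to the user's registered e-mail id
    # instead of displaying it on screen
    msg = EmailMessage()
    msg["Subject"] = "Your file download key"
    msg["From"] = "noreply@cloudserver.example"
    msg["To"] = recipient
    msg.set_content("Use this key to download your file: " + key)
    with smtplib.SMTP(smtp_host) as server:
        server.send_message(msg)

# send_key("3f9c2a1b", "user@example.com")   # requires a reachable SMTP server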