This document provides a technical project report on designing a database for an In-Flight Entertainment System (IFES). It includes details on functional dependencies, the logical and physical data models created, the relational schema generated in ER-Win, sample data loaded into the tables, and example queries. The goal of the project was to design a database to track movies and music accessed by passengers on flights and collect charging information, while also analyzing access patterns of different content.
This document provides the format and contents for a term paper on the game Tic Tac Toe. It includes sections for the title page, table of contents, introduction, proposed system description and requirements, requirement analysis, system design, source code, testing, and future scope. The introduction describes the existing manual system and benefits of the new software system. The proposed system section provides more details on the problem statement, functions, and system requirements. The requirement analysis section describes the system development life cycle. The system design section includes a flowchart illustrating the game play. The source code section lists the C code used to develop the software.
This document provides the format and structure for a term paper on the game Tic Tac Toe. It includes sections for the title page, table of contents, introduction, proposed system description and requirements, requirement analysis, system design, source code, testing, and future scope. It describes the problem statement, functions to be provided by the system, and system requirements. It also includes sample source code for the game and descriptions of the requirement analysis and system design processes. The overall summary provides an outline and overview of the contents and organization of the term paper project on developing a Tic Tac Toe game.
The document discusses techniques for evolutionary database development in an agile team. It recommends that the database administrator (DBA) work closely with other roles to iteratively refactor the database schema through small, frequent changes. It also emphasizes automated testing and deployment of database changes to safely evolve the database design over time.
The document summarizes a project management system developed for Zydus Cadila Health Care Ltd. Key features of the system include reflecting accurate project status, easy task dependency definition, customized security, and enhanced communication tools. The system was developed using ASP.NET, C#, VB.NET, and SQL Server. It includes modules for managing departments, employees, projects, tasks, targets, alerts and more. The database was designed with multiple tables to store related data and ensure referential integrity.
Cloud Computing for Business - The Road to IT-as-a-ServiceJames Urquhart
This document discusses the transition from infrastructure-centric IT operations to service-centric operations driven by cloud computing. It explains that cloud computing asks IT to deliver computing resources as a set of services rather than managing individual servers, networks, and storage. This new model of service operations is comprised of application operations, service operations, and infrastructure operations. Application operations focus on consuming services through tools like multi-cloud management. Service operations identify and deliver key services with supporting capabilities. Infrastructure operations provide the common foundation through APIs and automation.
Data Modeling, Normalization, and De-Normalization | PostgresOpen 2019 | Dimi...Citus Data
As a developer using PostgreSQL one of the most important tasks you have to deal with is modeling the database schema for your application. In order to achieve a solid design, it’s important to understand how the schema is then going to be used as well as the trade-offs it involves.
As Fred Brooks said: “Show me your flowcharts and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won’t usually need your flowcharts; they’ll be obvious.”
In this talk we're going to see practical normalisation examples and their benefits, and also review some anti-patterns and their typical PostgreSQL solutions, including Denormalization techniques thanks to advanced Data Types.
BI Team @ LinkedIn hosted a user group meeting for MicroStrategy customers in bay area. Presentation includes information about LinkedIn, concepts of metadata driven model for business dashboards, customizations using SDK, JSP and JQUERY.
The document discusses project management techniques including PERT and CPM. It explains that PERT and CPM are used to plan, schedule, and coordinate large projects by graphically displaying project activities, estimating project duration, identifying critical activities, and determining float. The framework involves defining activities and relationships, drawing network diagrams, estimating activity times, computing the critical path, and using the network to plan and control the project. Key terms and how to draw network diagrams are also covered.
This document discusses OLAP and MDX injection attacks. It provides an overview of OLAP and how MDX is used to query multidimensional data cubes. The document then explains how MDX injections can be used to expose sensitive data by manipulating MDX queries. Specific techniques are described, such as injecting into the WITH or SELECT clauses of an MDX query to conduct partial data retrieval or blind injections.
A talk given by Julian Hyde at DataCouncil SF on April 18, 2019
How do you organize your data so that your users get the right answers at the right time? That question is a pretty good definition of data engineering — but it is also describes the purpose of every DBMS (database management system). And it’s not a coincidence that these are so similar.
This talk looks at the patterns that reoccur throughout data management — such as caching, partitioning, sorting, and derived data sets. As the speaker is the author of Apache Calcite, we first look at these patterns through the lens of Relational Algebra and DBMS architecture. But then we apply these patterns to the modern data pipeline, ETL and analytics. As a case study, we look at how Looker’s “derived tables” blur the line between ETL and caching, and leverage the power of cloud databases.
The document provides an agenda and overview for a data warehousing training on SQL Server Reporting Services (SSRS). It discusses the architecture of SSRS, using the report designer and wizard, publishing reports, and building OLAP reports. It also assigns participants to create a data warehouse using SQL Server Integration Services from an AdventureWorks database and build BI reports in SSRS.
12 things Oracle DBAs must know about SQLSolarWinds
All DBAs and developers working with Oracle databases should have a solid knowledge of SQL, including what it is (and what it isn’t), how it works (and how it doesn’t) and how best to use it.
What you don't know about SQL could be having a negative impact on your database performance. In this dynamic presentation, you’ll learn 12 things you should know about SQL, including:
*Why SQL works the way it does
*How to work with SQL’s strengths and weaknesses
*Why how you write queries matters so much
*Why database design matters so much
*What statistics are so important
And much, much more.
The document discusses eBay's data warehouse (EDW) and metadata management applications. It provides a history of eBay and overview of the EDW, which started in 2000 and is now the largest Teradata installation in the world. It describes key applications including a data flow diagram tool, data rationalization process, and JobTrack tool for monitoring ETL jobs. These applications help optimize the EDW through automated metadata analysis and management.
BitLocker is drive encryption software included with Windows that encrypts the entire contents of the drive to protect against unauthorized access to data even if the drive is removed from the device. It stores the encryption key in the computer's Trusted Platform Module (TPM) chip or on an external USB drive for added security. BitLocker requires a Trusted Platform Module version 1.2 or higher, or the ability to store the recovery key on an external drive in order to encrypt the system drive.
This document contains the syllabus for the 7th semester Computer Technology diploma program. It includes details for the Network Administration and Services course. The course aims to provide knowledge and skills related to network administration, routing protocols, network security, and link redundancy. The theory section covers topics like OSI and TCP/IP models, subnetting, routing protocols like OSPF, EIGRP, RIP, VLANs, NAT, IPv6, and redundancy protocols. The practical section includes tasks to design and simulate networks implementing these protocols and concepts using tools like Packet Tracer.
System engineering involves determining operational requirements and modeling relationships between elements like hardware, software, and people to accomplish goals. It can focus on business processes or product development. The engineering process follows a hierarchy from overall objectives to domain specifications to element implementations. It is iterative to adapt to changing needs. Business process engineering derives data, application, and technology architectures, while product engineering defines architectures and infrastructure for software, hardware, data, and people components.
Basics of Microsoft Business Intelligence and Data Integration TechniquesValmik Potbhare
The presentation used to get the conceptual understanding of Business Intelligence and Data warehousing applications. This also gives a basic knowledge about Microsoft's offerings on Business Intelligence space. Lastly but not least, it also contains some useful and uncommon SQL server programming best practices.
Big Data: Guidelines and Examples for the Enterprise Decision MakerMongoDB
This document provides an overview of a real-time directed content system that uses MongoDB, Hadoop, and MapReduce. It describes:
- The key participants in the system and their roles in generating, analyzing, and operating on data
- An architecture that uses MongoDB for real-time user profiling and content recommendations, Hadoop for periodic analytics on user profiles and content tags, and MapReduce jobs to update the profiles
- How the system works over time to continuously update user profiles based on their interactions with content, rerun analytics daily to update tags and baselines, and make recommendations based on the updated profiles
- How the system supports both real-time and long-term analytics needs through this integrated approach.
1. NN:58 IFES Project Report NN:58
DBDS DataBase Design Specialists, Inc.
"We are always 5NF."
Technical Project Report on In-Flight Entertainment System (IFES)
By
Anish Hemmady
NN:58
Table of Contents

1) Project Statement .............................................. 3
2) Functional Dependencies ........................................ 4
3) Steps Carried Out for the Design ............................... 6
4) Logical Data Model (LDM) ....................................... 7
5) Physical Data Model (PDM) ...................................... 8
6) Relational Data Model Generated by ER-Win ...................... 9
7) Data Loading ................................................... 13
8) Tables after Loading the Data .................................. 16
9) Queries ........................................................ 19
10) Technical Responses to Changes Requested in Memo 7 ............ 23
11) Appendix ...................................................... 24
1) In-Flight Entertainment System (IFES) Project Statement
The motivation for the In-Flight Entertainment System (IFES) project is to keep a record of
the movies and music passengers play during their airline flight and to collect the appropriate
charges. A charge system is already in place to bill passengers for the works they access. In
addition to this charge-back system, we will also study access patterns for works in the three
categories we will be using (Music Album, Movie, and Short Subject). The premise of this study
of access patterns is that the customer operating the IFES will be able to tune or optimize the
offerings, making the system more enjoyable for passengers.
In short, we are going to produce a design so that access patterns for the various works can be
examined using a database. Some existing data will serve as our starting point.
The IFES project involves many people: the managers who provide us the essential information
for the design, the project director who monitors those managers, and finally the passengers who
access the IFES and play a major role in our experiment.
Detailed memos handed to us provided sample test data for designing the database. Using this
data together with database design principles, both the top-down and bottom-up approaches, we
were able to design the database. The main aim of the project was to identify non-trivial
functional dependencies in order to remove logical redundancies in the data, and to create
relationships among the different entities.
2) Functional Dependencies
[Functional-dependency diagram over the attributes: Access No, Class, Work No, Catcode,
Name, Date Added, Size, Loc, Runtime, Rating, Access Date, Access Time, Charge, Title,
Usercode, Name, Start Date, End Date, DOB, NumAcc. The dependency arrows are shown in the
original diagram, which does not reproduce in text form.]
3) Steps Carried Out for the Design
The following steps were carried out to reach the latest updated LDM and PDM:
1) I placed the independent entities at the top and the dependent entities at the bottom of the
new diagram.
2) Per the data provided, user information was included in the tables, and information about the
managers and the director is included in the diagram.
3) A PDM (physical data model) has been provided based on the given data.
4) Security is provided in the updated LDM and PDM through referential integrity constraints.
5) Composite primary keys are present in the LDM.
6) Referential integrity constraints have been provided.
7) Subtypes of users have been created to specialize users as passengers, managers, and the
director.
8) All questions posed by Bob Bureaucrat can be answered using the updated or revised LDM
and PDM.
9) The director can keep track of the managers using his table, which appears as a subtype in the
revised LDM.
10) The new LDM and PDM were generated from the functional dependencies using the
bottom-up approach given below, which produced normalized tables.
11) The top-down approach was also taken into consideration while the tables were created.
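Steps 4 through 7 above can be sketched in code. The following is an illustrative SQLite sketch (not the ER-Win T-SQL output, and with the column set trimmed for brevity) of the supertype/subtype pattern: User_info is the supertype, and a subtype table such as Passenger_info reuses Usercode as both its primary key and a foreign key, so referential integrity rejects a subtype row with no matching supertype row.

```python
import sqlite3

# Illustrative sketch only: SQLite syntax, simplified columns.
con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # FK enforcement is off by default in SQLite

# Supertype: all users, whatever their role.
con.execute("CREATE TABLE User_info (Usercode TEXT PRIMARY KEY, Name TEXT)")
# Subtype: Usercode is both primary key and foreign key to the supertype.
con.execute("""CREATE TABLE Passenger_info (
    Usercode TEXT PRIMARY KEY REFERENCES User_info(Usercode),
    NumAcc INTEGER)""")

con.execute("INSERT INTO User_info VALUES ('MC1014', 'John Doe')")
con.execute("INSERT INTO Passenger_info VALUES ('MC1014', 6)")  # accepted

# Referential integrity: a passenger row without a user row is rejected.
try:
    con.execute("INSERT INTO Passenger_info VALUES ('MC9999', 1)")
except sqlite3.IntegrityError:
    print("rejected: MC9999 is not in User_info")
```

The same shape applies to the Manager_info and Director subtypes in the full model.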
4) Logical Data Model (LDM), Latest Version
6) Relational Data Model Generated By ER-Win
CREATE TABLE [User_info]
(
[Usercode] varchar(10) NOT NULL ,
[Name] char(15) NULL ,
[StartDate] date NULL ,
[EndDate] date NULL ,
[DOB] date NULL ,
PRIMARY KEY CLUSTERED([Usercode] ASC)
)
go
CREATE TABLE [Passenger_info]
(
[Usercode] varchar(10) NOT NULL ,
[NumAcc] integer NULL ,
PRIMARY KEY CLUSTERED([Usercode] ASC),
FOREIGN KEY ([Usercode]) REFERENCES [User_info]([Usercode])
)
go
CREATE TABLE [Director]
(
[Usercode] varchar(10) NOT NULL ,
[CheckingDate] date NULL ,
PRIMARY KEY CLUSTERED([Usercode] ASC),
FOREIGN KEY ([Usercode]) REFERENCES [User_info]([Usercode])
)
go
CREATE TABLE [Manager_info]
(
[Usercode] varchar(10) NOT NULL ,
[LstAccessDate] date NULL ,
PRIMARY KEY CLUSTERED([Usercode] ASC),
FOREIGN KEY ([Usercode]) REFERENCES [User_info]([Usercode])
)
go
CREATE TABLE [Category]
(
[Catcode] char(2) NOT NULL ,
[Name] varchar(20) NULL ,
[Usercode] varchar(10) NULL ,
PRIMARY KEY CLUSTERED([Catcode] ASC),
FOREIGN KEY ([Usercode]) REFERENCES [Manager_info]([Usercode])
)
go
CREATE TABLE [Classification]
(
[Class] char(8) NOT NULL ,
[Catcode] char(2) NULL ,
[Charge] decimal(4,2) NULL ,
[ClassDescription] char(25) NULL ,
PRIMARY KEY CLUSTERED([Class] ASC),
FOREIGN KEY ([Catcode]) REFERENCES [Category]([Catcode])
)
go
CREATE TABLE [Rating]
(
[Rating] char(5) NOT NULL ,
[RatingInfo] varchar(25) NULL ,
PRIMARY KEY CLUSTERED([Rating] ASC)
)
go
CREATE TABLE [Works_info]
(
[WorkNo] integer NOT NULL ,
[Class] char(8) NOT NULL ,
[Title] char(25) NULL ,
[DateAdded] date NULL ,
[Size] integer NULL ,
[Loc] char(2) NULL ,
[Runtime] integer NULL ,
[Rating] char(5) NULL ,
PRIMARY KEY CLUSTERED([WorkNo] ASC,[Class] ASC),
FOREIGN KEY ([Class]) REFERENCES [Classification]([Class]),
FOREIGN KEY ([Rating]) REFERENCES [Rating]([Rating])
)
go
CREATE TABLE [Access_info]
(
[AccessNo] integer NOT NULL ,
[WorkNo] integer NOT NULL ,
[Class] char(8) NOT NULL ,
[Usercode] varchar(10) NULL ,
[AccessTime] integer NULL ,
[AccessDate] date NULL ,
PRIMARY KEY CLUSTERED([AccessNo] ASC,[WorkNo] ASC,[Class] ASC),
FOREIGN KEY ([WorkNo],[Class]) REFERENCES [Works_info]([WorkNo],[Class]),
FOREIGN KEY ([Usercode]) REFERENCES [User_info]([Usercode])
)
go
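To illustrate how the composite key above behaves, here is a hedged SQLite sketch (simplified from the T-SQL, and not the ER-Win output): Access_info references Works_info on the pair (WorkNo, Class), so an access row must match an existing work in both columns at once.

```python
import sqlite3

# Illustrative sketch only: SQLite syntax, trimmed columns.
con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")

# Works are identified by the composite key (WorkNo, Class).
con.execute("""CREATE TABLE Works_info (
    WorkNo INTEGER, Class TEXT, Title TEXT,
    PRIMARY KEY (WorkNo, Class))""")
# Access_info carries a composite foreign key back to Works_info.
con.execute("""CREATE TABLE Access_info (
    AccessNo INTEGER, WorkNo INTEGER, Class TEXT, AccessTime INTEGER,
    PRIMARY KEY (AccessNo, WorkNo, Class),
    FOREIGN KEY (WorkNo, Class) REFERENCES Works_info (WorkNo, Class))""")

con.execute("INSERT INTO Works_info VALUES (1, 'Fan', 'Harry Potter')")
con.execute("INSERT INTO Access_info VALUES (21, 1, 'Fan', 152)")  # accepted

try:
    # Same WorkNo, but no work exists with Class 'SiFi': rejected.
    con.execute("INSERT INTO Access_info VALUES (22, 1, 'SiFi', 60)")
except sqlite3.IntegrityError:
    print("rejected: no work (1, 'SiFi') exists")
```

This is why both WorkNo and Class must travel together into the child table: either column alone does not identify a work.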
7) Data Loading
---Insert Statement For User_info Table------
use IFES58;
INSERT INTO User_info VALUES('MC1014','John Doe','2008-12-11',null,'1988-01-11')
INSERT INTO User_info VALUES('MC1011','Mary Smith','2011-03-11',null,'1976-07-04');
INSERT INTO User_info VALUES('MC1009','Mike Smith','2011-01-23',null,'1975-02-02');
INSERT INTO User_info VALUES('MC1035','Ann Black','2011-03-12','2011-03-16','1972-06-17');
INSERT INTO User_info VALUES('MC0800','Ken White','1999-03-12','2011-03-30','1966-07-07');
INSERT INTO User_info VALUES('MC1039','Jane Dough','2011-03-14','2011-03-14','1990-12-25');
INSERT INTO User_info VALUES('MC1040','Karen Kline','2011-03-14','2011-03-16','1988-05-05');
INSERT INTO User_info VALUES('MC1045','Jack Jones','2001-01-22',null,'1980-07-23');
INSERT INTO User_info VALUES('MC1022','Bill Doe','2011-03-15',null,'2001-01-01');
INSERT INTO User_info VALUES('MC1023','Kelly Doe','2011-03-15',null,'2006-06-06');
INSERT INTO User_info VALUES('MC0900','James Mason','1999-05-22',null,'1980-02-23');
INSERT INTO User_info VALUES('MC1065','Joe Gray','2011-01-13',null,'1995-11-23');
INSERT INTO User_info VALUES('MC0000','Bob Bureaucrat',null,null,null);
INSERT INTO User_info VALUES('MC0101','Sam Supervisor',null,null,null);
INSERT INTO User_info VALUES('MC0102','Mary Manager',null,null,null);
INSERT INTO User_info VALUES('MC0103','Fred Foreman',null,null,null);
---INSERT STATEMENT FOR Passenger_info Table----
use IFES58;
INSERT INTO Passenger_info VALUES('MC1014',6);
INSERT INTO Passenger_info VALUES('MC1011',2);
INSERT INTO Passenger_info VALUES('MC1009',3);
INSERT INTO Passenger_info VALUES('MC1035',2);
INSERT INTO Passenger_info VALUES('MC0800',1);
INSERT INTO Passenger_info VALUES('MC1039',1);
INSERT INTO Passenger_info VALUES('MC1040',1);
INSERT INTO Passenger_info VALUES('MC1045',2);
INSERT INTO Passenger_info VALUES('MC1022',1);
INSERT INTO Passenger_info VALUES('MC1023',1);
INSERT INTO Passenger_info VALUES('MC0900',0);
INSERT INTO Passenger_info VALUES('MC1065',0);
---INSERT Statement For Manager_info Table-------
INSERT INTO Manager_info VALUES('MC0101',null);
INSERT INTO Manager_info VALUES('MC0102',null);
INSERT INTO Manager_info VALUES('MC0103',null);
---INSERT Statement For Director Table-----
INSERT INTO Director VALUES('MC0000',null);
---INSERT Statement For Category Table------
INSERT INTO Category VALUES('MU','Music Album','MC0101');
INSERT INTO Category VALUES('MO','Movie','MC0102');
INSERT INTO Category VALUES('SS','Short Subject','MC0103');
---INSERT Statement For Classification Table----
INSERT INTO Classification VALUES('SS','MU',3,'Show Tunes');
INSERT INTO Classification VALUES('Dram','SS',4,'Drama');
INSERT INTO Classification VALUES('Fan','MO',5,'Fantasy');
INSERT INTO Classification VALUES('SiFi','MO',5,'Science Fiction');
INSERT INTO Classification VALUES('Folk','MU',2,'Folk Tunes');
---INSERT Statement For Rating Table-----
INSERT INTO Rating VALUES('NR','Not Rated');
INSERT INTO Rating VALUES('PG','Age <= 12');
INSERT INTO Rating VALUES('PG13','Age > 12');
INSERT INTO Rating VALUES('R','Age > 17');
INSERT INTO Rating VALUES('MA','Age > 21');
INSERT INTO Rating VALUES('G','General');
-------INSERT Statement For Works_info Table------------
INSERT INTO Works_info VALUES(05,'SS','Annie','2013-03-01',600,'E',80,'NR');
INSERT INTO Works_info VALUES(02,'Dram','West Wing','2013-02-01',590,'D',53,'NR');
INSERT INTO Works_info VALUES(01,'Fan','Harry Potter','2014-01-01',4000,'D',152,'PG');
INSERT INTO Works_info VALUES(03,'Fan','Phantom of the Opera','2015-05-01',3500,'D',141,'PG13');
INSERT INTO Works_info VALUES(06,'SiFi','King Kong','2015-07-01',2500,'D',100,'NR');
INSERT INTO Works_info VALUES(10,'SiFi','War of the Worlds','2015-06-01',2800,'D',117,'PG13');
INSERT INTO Works_info VALUES(04,'Folk','Carpenters Gold','2014-11-01',680,'E',95,'NR');
INSERT INTO Works_info VALUES(07,'SS','J.C.Superstar','2014-01-01',590,'E',53,'NR');
INSERT INTO Works_info VALUES(13,'Fan','Alice in Wonderland','2015-03-01',2700,'D',109,'PG');
INSERT INTO Works_info VALUES(14,'SiFi','Galaxy Quest','2014-01-01',2500,'D',102,'PG');
INSERT INTO Works_info VALUES(09,'SiFi','War of the Worlds','2014-01-01',3800,'D',97,'R');
INSERT INTO Works_info VALUES(08,'SS','Phantom of the Opera','2014-01-01',627,'E',78,'NR');
INSERT INTO Works_info VALUES(07,'Fan','Blazing Saddles','2015-04-01',3000,'D',93,'R');
INSERT INTO Works_info VALUES(11,'Fan','Alice in Wonderland','2014-07-01',2500,'D',75,'G');
INSERT INTO Works_info VALUES(12,'Fan','Alice in Wonderland','2014-07-01',2900,'D',83,'MA');
INSERT INTO Works_info VALUES(04,'SiFi','West World','2014-07-01',2000,'D',88,'PG');
INSERT INTO Works_info VALUES(01,'Folk','James Taylor(album)','2014-06-01',2000,'E',152,'NR');
INSERT INTO Works_info VALUES(01,'Dram','Avengers','2014-06-01',2000,'E',52,'NR');
----INSERT Statement For Access_info Table-------
use IFES58;
INSERT INTO Access_info VALUES(77,05,'SS',null,43,'2015-03-03');
INSERT INTO Access_info VALUES(20,02,'Dram',null,42,'2015-03-10');
INSERT INTO Access_info VALUES(21,01,'Fan','MC1014',152,'2015-03-10');
INSERT INTO Access_info VALUES(33,03,'Fan','MC1014',43,'2015-03-11');
INSERT INTO Access_info VALUES(57,06,'SiFi','MC1011',100,'2015-03-12');
INSERT INTO Access_info VALUES(54,10,'SiFi','MC1009',115,'2015-03-12');
INSERT INTO Access_info VALUES(98,04,'Folk','MC1009',37,'2015-03-13');
INSERT INTO Access_info VALUES(30,07,'SS','MC1009',92,'2015-03-13');
INSERT INTO Access_info VALUES(80,10,'SiFi','MC1035',45,'2015-03-13');
INSERT INTO Access_info VALUES(44,13,'Fan','MC1035',85,'2015-03-15');
INSERT INTO Access_info VALUES(45,01,'Fan','MC0800',152,'2015-03-15');
INSERT INTO Access_info VALUES(82,14,'SiFi','MC1039',95,'2015-03-15');
INSERT INTO Access_info VALUES(43,09,'SiFi','MC1014',60,'2015-03-15');
INSERT INTO Access_info VALUES(54,08,'SS','MC1014',25,'2015-03-15');
INSERT INTO Access_info VALUES(60,07,'Fan','MC1040',95,'2015-03-15');
INSERT INTO Access_info VALUES(91,11,'Fan','MC1045',60,'2015-03-15');
INSERT INTO Access_info VALUES(95,12,'Fan','MC1045',95,'2015-03-16');
INSERT INTO Access_info VALUES(98,04,'SiFi','MC1011',75,'2015-03-16');
INSERT INTO Access_info VALUES(91,01,'Folk','MC1014',152,'2015-03-16');
INSERT INTO Access_info VALUES(82,01,'Dram','MC1014',25,'2015-03-16');
8) Tables after Loading the Data
use IFES58;
SELECT * FROM User_info;
Usercode Name StartDate EndDate DOB
---------- --------------- ---------- ---------- ----------
MC0000 Bob Bureaucrat NULL NULL NULL
MC0101 Sam Supervisor NULL NULL NULL
MC0102 Mary Manager NULL NULL NULL
MC0103 Fred Foreman NULL NULL NULL
MC0800 Ken White 1999-03-12 2011-03-30 1966-07-07
MC0900 James Mason 1999-05-22 NULL 1980-02-23
MC1009 Mike Smith 2011-01-23 NULL 1975-02-02
MC1011 Mary Smith 2011-03-11 NULL 1976-07-04
MC1014 John Doe 2008-12-11 NULL 1988-01-11
MC1022 Bill Doe 2011-03-15 NULL 2001-01-01
MC1023 Kelly Doe 2011-03-15 NULL 2006-06-06
MC1035 Ann Black 2011-03-12 2011-03-16 1972-06-17
MC1039 Jane Dough 2011-03-14 2011-03-14 1990-12-25
MC1040 Karen Kline 2011-03-14 2011-03-16 1988-05-05
MC1045 Jack Jones 2001-01-22 NULL 1980-07-23
MC1065 Joe Gray 2011-01-13 NULL 1995-11-23
(16 row(s) affected)
use IFES58;
SELECT * FROM Manager_info;
Usercode LstAccessDate
---------- -------------
MC0101 NULL
MC0102 NULL
MC0103 NULL
(3 row(s) affected)
use IFES58;
SELECT * FROM Passenger_info;
Usercode NumAcc
---------- -----------
MC0800 1
MC0900 0
MC1009 3
MC1011 2
MC1014 6
MC1022 1
MC1023 1
MC1035 2
MC1039 1
MC1040 1
MC1045 2
MC1065 0
(12 row(s) affected)
use IFES58;
SELECT * FROM Director;
Usercode CheckingDate
---------- ------------
MC0000 NULL
(1 row(s) affected)
use IFES58;
SELECT * FROM Category;
Catcode Name Usercode
------- -------------------- ----------
MO Movie MC0102
MU Music Album MC0101
SS Short Subject MC0103
(3 row(s) affected)
use IFES58;
SELECT * FROM Classification;
Class Catcode Charge ClassDescription
-------- ------- --------------------------------------- -------------------------
Dram SS 4.00 Drama
Fan MO 5.00 Fantasy
Folk MU 2.00 Folk Tunes
SiFi MO 5.00 Science Fiction
SS MU 3.00 Show Tunes
(5 row(s) affected)
use IFES58;
SELECT * FROM Rating;
Rating RatingInfo
------ -------------------------
G General
MA Age > 21
NR Not Rated
PG     Age <= 12
PG13 Age > 12
R Age > 17
(6 row(s) affected)
use IFES58;
SELECT * FROM Works_info;
WorkNo Class Title DateAdded Size Loc Runtime Rating
----------- -------- ------------------------- ---------- ----------- ---- ----------- ------
1 Dram Avengers 2014-06-01 2000 E 52 NR
1 Fan Harry Potter 2014-01-01 4000 D 152 PG
1 Folk James Taylor(album) 2014-06-01 2000 E 152 NR
2 Dram West Wing 2013-02-01 590 D 53 NR
3 Fan Phantom of the Opera 2015-05-01 3500 D 141 PG13
4 Folk Carpenters Gold 2014-11-01 680 E 95 NR
4 SiFi West World 2014-07-01 2000 D 88 PG
5 SS Annie 2013-03-01 600 E 80 NR
6 SiFi King Kong 2015-07-01 2500 D 100 NR
7 Fan Blazing Saddles 2015-04-01 3000 D 93 R
7 SS J.C.Superstar 2014-01-01 590 E 53 NR
8 SS Phantom of the Opera 2014-01-01 627 E 78 NR
9 SiFi War of the Worlds 2014-01-01 3800 D 97 R
10 SiFi War of the Worlds 2015-06-01 2800 D 117 PG13
11 Fan Alice in Wonderland 2014-07-01 2500 D 75 G
12 Fan Alice in Wonderland 2014-07-01 2900 D 83 MA
13 Fan Alice in Wonderland 2015-03-01 2700 D 109 PG
14 SiFi Galaxy Quest 2014-01-01 2500 D 102 PG
(18 row(s) affected)
use IFES58;
SELECT * FROM Access_info;
AccessNo WorkNo Class Usercode AccessTime AccessDate
----------- ----------- -------- ---------- ----------- ----------
20 2 Dram NULL 42 2015-03-10
21 1 Fan MC1014 152 2015-03-10
30 7 SS MC1009 92 2015-03-13
33 3 Fan MC1014 43 2015-03-11
43 9 SiFi MC1014 60 2015-03-15
44 13 Fan MC1035 85 2015-03-15
45 1 Fan MC0800 152 2015-03-15
54 8 SS MC1014 25 2015-03-15
54 10 SiFi MC1009 115 2015-03-12
57 6 SiFi MC1011 100 2015-03-12
60 7 Fan MC1040 95 2015-03-15
77 5 SS NULL 43 2015-03-03
80 10 SiFi MC1035 45 2015-03-13
82 1 Dram MC1014 25 2015-03-16
82 14 SiFi MC1039 95 2015-03-15
91 1 Folk MC1014 152 2015-03-16
91 11 Fan MC1045 60 2015-03-15
95 12 Fan MC1045 95 2015-03-16
98 4 Folk MC1009 37 2015-03-13
98 4 SiFi MC1011 75 2015-03-16
(20 row(s) affected)
9) Queries
a) What is the longest access time for a Science Fiction work?
use IFES58;
SELECT MAX(AccessTime) as max_access_time from Access_info where Class='SiFi';
max_access_time
---------------
115
(1 row(s) affected)
b) For each access, list all of its access times (along with its work number) in chronological
(date) order.
use IFES58;
SELECT AccessNo,AccessTime,WorkNo,AccessDate FROM Access_info ORDER BY
AccessNo,AccessDate;
AccessNo AccessTime WorkNo AccessDate
----------- ----------- ----------- ----------
20 42 2 2015-03-10
21 152 1 2015-03-10
30 92 7 2015-03-13
33 43 3 2015-03-11
43 60 9 2015-03-15
44 85 13 2015-03-15
45 152 1 2015-03-15
54 115 10 2015-03-12
54 25 8 2015-03-15
57 100 6 2015-03-12
60 95 7 2015-03-15
77 43 5 2015-03-03
80 45 10 2015-03-13
82 95 14 2015-03-15
82 25 1 2015-03-16
91 60 11 2015-03-15
91 152 1 2015-03-16
95 95 12 2015-03-16
98 37 4 2015-03-13
98 75 4 2015-03-16
(20 row(s) affected)
c) What is the full name of Category SS ?
use IFES58;
SELECT Name AS fullname_SS FROM Category WHERE Catcode='SS';
fullname_SS
--------------------
Short Subject
(1 row(s) affected)
d) List the access times that are within 5 minutes of the run time for the work.
use IFES58;
SELECT A.AccessTime,ABS(A.AccessTime-W.Runtime) AS Abs_Time_taken FROM
Access_info A,Works_info W WHERE A.WorkNo=W.WorkNo AND A.Class=W.Class AND
ABS(A.AccessTime-W.Runtime)<=5;
AccessTime Abs_Time_taken
----------- -----------------
152 0
152 0
115 2
100 0
95 2
152 0
(6 row(s) affected)
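The join logic in query d can be checked on a tiny dataset. The following SQLite sketch (illustrative only, with made-up sample rows, not the full IFES data) reproduces the ABS(AccessTime - Runtime) <= 5 test, which flags accesses where a passenger played (nearly) the whole work:

```python
import sqlite3

# Illustrative sketch: two full-length accesses and one partial access.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Works_info (WorkNo INT, Class TEXT, Runtime INT);
CREATE TABLE Access_info (WorkNo INT, Class TEXT, AccessTime INT);
INSERT INTO Works_info VALUES (1, 'Fan', 152), (6, 'SiFi', 100);
INSERT INTO Access_info VALUES (1, 'Fan', 152), (6, 'SiFi', 100),
                               (1, 'Fan', 43);
""")
# Join on the composite key, then keep accesses within 5 minutes of runtime.
rows = con.execute("""
    SELECT A.AccessTime, ABS(A.AccessTime - W.Runtime) AS diff
    FROM Access_info A JOIN Works_info W
      ON A.WorkNo = W.WorkNo AND A.Class = W.Class
    WHERE ABS(A.AccessTime - W.Runtime) <= 5
""").fetchall()
print(rows)  # the two full-length accesses qualify; the 43-minute one does not
```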
e) List the access information for movies accessed in March 2015.
use IFES58;
SELECT A.AccessNo,A.WorkNo,A.Class,A.Usercode,A.AccessTime,A.AccessDate FROM
Access_info A,Classification C,Category Ca WHERE A.Class=C.Class AND
C.Catcode=Ca.Catcode AND Ca.Name='Movie' AND A.AccessDate
BETWEEN '2015-03-01' AND '2015-03-31';
AccessNo WorkNo Class Usercode AccessTime AccessDate
----------- ----------- -------- ---------- ----------- ----------
21 1 Fan MC1014 152 2015-03-10
33 3 Fan MC1014 43 2015-03-11
43 9 SiFi MC1014 60 2015-03-15
44 13 Fan MC1035 85 2015-03-15
45 1 Fan MC0800 152 2015-03-15
54 10 SiFi MC1009 115 2015-03-12
57 6 SiFi MC1011 100 2015-03-12
60 7 Fan MC1040 95 2015-03-15
80 10 SiFi MC1035 45 2015-03-13
82 14 SiFi MC1039 95 2015-03-15
91 11 Fan MC1045 60 2015-03-15
95 12 Fan MC1045 95 2015-03-16
98 4 SiFi MC1011 75 2015-03-16
(13 row(s) affected)
f) What categories contain works added to the system in January 2014?
use IFES58;
SELECT Catcode FROM Category WHERE Catcode IN(SELECT Catcode FROM Classification
WHERE Class IN(SELECT Class FROM Works_info
WHERE DateAdded BETWEEN '2014-01-01' AND '2014-01-31'));
Catcode
-------
MO
MU
(2 row(s) affected)
g) Where are the Short Subjects stored on the system?
use IFES58;
SELECT W.Loc AS Location_ShortSubject_Stored FROM Works_info W,Classification
CL,Category C WHERE W.Class=CL.Class AND CL.Catcode=C.Catcode AND C.Name='Short
Subject';
Location_ShortSubject_Stored
----------------------------
E
D
(2 row(s) affected)
h) Who (Name and MC code) manages the Movie category?
use IFES58;
SELECT U.Name,U.Usercode FROM User_info U,Manager_info M,Category C WHERE
U.Usercode=M.Usercode AND M.Usercode=C.Usercode AND C.Name='Movie';
Name Usercode
--------------- ----------
Mary Manager MC0102
(1 row(s) affected)
i) Who (Name and MC code) has access to data about the Music category?
use IFES58;
SELECT U.Name,U.Usercode FROM User_info U,Manager_info M,Category C WHERE
U.Usercode=M.Usercode AND M.Usercode=C.Usercode AND C.Name='Music Album';
Name Usercode
--------------- ----------
Sam Supervisor MC0101
(1 row(s) affected)
j) How many accesses are charged to each passenger (Name and MC code)?
use IFES58;
SELECT U.Usercode,U.Name,COUNT(U.Usercode) as Number_of_accesses FROM User_info
U, Access_info A WHERE U.Usercode=A.Usercode GROUP BY U.Usercode,U.Name;
Usercode Name Number_of_accesses
---------- --------------- ------------------
MC0800 Ken White 1
MC1009 Mike Smith 3
MC1011 Mary Smith 2
MC1014 John Doe 6
MC1035 Ann Black 2
MC1039 Jane Dough 1
MC1040 Karen Kline 1
MC1045 Jack Jones 2
(8 row(s) affected)
k) List the accesses made by MC1014 in date order.
use IFES58;
SELECT AccessNo,WorkNo,Class,Usercode,AccessTime,AccessDate FROM Access_info
WHERE Usercode='MC1014' ORDER BY AccessDate;
AccessNo WorkNo Class Usercode AccessTime AccessDate
----------- ----------- -------- ---------- ----------- ----------
21 1 Fan MC1014 152 2015-03-10
33 3 Fan MC1014 43 2015-03-11
43 9 SiFi MC1014 60 2015-03-15
54 8 SS MC1014 25 2015-03-15
82 1 Dram MC1014 25 2015-03-16
91 1 Folk MC1014 152 2015-03-16
(6 row(s) affected)
10) Technical Responses to Changes Requested in Memo 7
1: New Data from Bob
If we make Usercode a primary-key component of the Access_info table along with WorkNo,
Class, and AccessNo, then we can insert the new data. However, there are two rows in the
access table containing null user information; this may be old data that we would want to
remove, as it could otherwise cause problems.
2: Information from Sam and Mary
a) The MA-rated version of Alice in Wonderland can be taken off the system if the deletion is
performed properly. There are three rows in the Works_info table with the title Alice in
Wonderland, and the Access_info table uses the key attributes of Works_info as part of its
primary key. So to delete the MA-rated version of Alice in Wonderland, one must first delete
the corresponding tuple from Access_info (the child table) and then perform the deletion
from Works_info (the parent table).
b) The Classification table can easily be updated to change the Show Tunes charge to $2.50.
3: Data Items from Bob That Look Different
Yes, these last-minute changes would cause problems. In the new data the Folk class charge is
$3, whereas it is $2 for Folk in our current database. To make the database consistent, we have
to update the Folk class charge to $3.
For the first row of the table in Bob's data, we would have to delete the earlier data
corresponding to work number 01 before we could insert the new data with the updated Folk
charge. The second row can be inserted without much trouble, needing only the update of the
Folk class charge to $3.
4: Number of Accesses Field
The Number of Accesses field can be removed from the Passenger_info table without causing
any problem; we can use COUNT to derive the number of accesses made by a passenger by
counting that Usercode in the Access_info table.
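The COUNT-based derivation of the Number of Accesses field can be sketched as follows. This is an illustrative SQLite sketch with a tiny made-up subset of the access data (not the full IFES tables): once accesses are counted per Usercode, a stored NumAcc column is redundant.

```python
import sqlite3

# Illustrative sketch: derive per-passenger access counts instead of
# storing them in Passenger_info.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Access_info (AccessNo INT, Usercode TEXT);
INSERT INTO Access_info VALUES (21, 'MC1014'), (33, 'MC1014'),
                               (57, 'MC1011'), (20, NULL);
""")
# NULL usercodes (anonymous accesses) are excluded, as they are not
# charged to any passenger.
counts = dict(con.execute(
    "SELECT Usercode, COUNT(*) FROM Access_info "
    "WHERE Usercode IS NOT NULL GROUP BY Usercode"))
print(counts)  # one count per passenger who made at least one access
```

Dropping the stored NumAcc also removes an update anomaly: the column can never drift out of sync with Access_info.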
11) Appendix
Previous drafts of the LDM, PDM, and FDs.
Version 1 of the LDM
Version 1 FD(s)
X -> Y (rows are the determinant X, columns the dependent Y; Y(t) marks the trivial
dependency, Y marks a dependency that holds, and numbered entries such as 1&8 cite the
sample rows examined against it):

X \ Y       Class Catcode Name  DateAdded Size  Loc   Runtime Rating AccessNo AccessDate AccessTime Charge WorkNo Title
Class       Y(t)  Y       Y     1&8       1&8   Y     1&8     4&10   1&8      1&8        1&8        Y      1&8    1&8
Catcode     1&7   Y(t)    Y     3&4       1&8   Y     3&9     3&4    1&8      3&4        1&8        7&8    1&8    1&8
Name        7&8   Y       Y(t)  1&8       3&4   Y     1&8     3&4    1&8      1&8        1&8        7&8    1&8    1&8
DateAdded   3&8   3&8     3&8   Y(t)      3&8   3&8   3&8     3&8    3&8      3&8        3&8        3&8    3&8    3&8
Size        2&8   2&8     2&8   2&8       Y(t)  2&8   Y       Y      2&8      2&8        2&8        2&8    2&8    2&8
Loc         1&7   2&3     2&3   1&7       1&7   Y(t)  1&7     2&3    1&7      1&7        1&7        2&3    1&7    1&7
Runtime     2&8   2&8     2&8   2&8       Y     2&8   Y(t)    Y      2&8      2&8        2&8        2&8    2&8    2&8
Rating      1&2   1&2     1&2   1&2       1&2   1&2   1&2     Y(t)   1&2      1&2        1&2        1&2    1&2    1&2
AccessNo    Y     Y       Y     Y         Y     Y     Y       Y      Y(t)     Y          Y          Y      Y      Y
AccessDate  2&3   2&3     2&3   2&3       2&3   8&9   2&3     2&3    2&3      Y(t)       2&3        2&3    2&3    2&3
AccessTime  1&4   1&4     1&4   1&4       1&4   1&4   1&4     1&4    1&4      1&4        Y(t)       1&4    1&4    1&4
Charge      4&5   Y       Y     1&8       1&8   Y     1&8     3&4    1&8      1&8        1&8        Y(t)   1&8    1&8
WorkNo      Y     Y       Y     Y         Y     Y     Y       Y      3&11     3&11       6&9        Y      Y(t)   Y
Title       Y     Y       Y     Y         Y     Y     Y       Y      3&11     3&11       6&9        Y      Y      Y(t)