MAKAUT BCA DBMS Previous Year Questions (PYQ) with Solutions

This page provides a comprehensive collection of previous year questions (PYQ) from MAKAUT's BCA Database Management Systems (DBMS) exams. Many questions are frequently repeated, making this resource essential for exam preparation. It includes key theory questions with detailed answers that can be easily memorized or conceptually understood. Whether you're revising for upcoming exams or looking for solved papers, this resource is designed to help you excel in your DBMS subject.


Q1. Define super key, candidate key and primary key.

Ans: Super Key, Candidate Key, and Primary Key are important concepts in database management systems (DBMS).

  1. Super Key: A super key is a set of one or more attributes (columns) that can uniquely identify a record in a table. It can have additional attributes that are not necessary for unique identification. Example: In a student table, if you have Roll_No, Name, and Phone_Number, then {Roll_No}, {Roll_No, Name}, and {Roll_No, Phone_Number} are all super keys.
  2. Candidate Key: A candidate key is a minimal super key, meaning it has no unnecessary attributes. It is the smallest possible combination of attributes that can still uniquely identify a record. Example: If {Roll_No} is enough to uniquely identify a student, it is a candidate key. {Roll_No, Name} is a super key but not a candidate key because Name is unnecessary.
  3. Primary Key: A primary key is a candidate key that is chosen by the database designer as the main key to uniquely identify records in a table. It cannot have NULL values and must be unique. Example: If Roll_No is chosen as the primary key, it will uniquely identify every student in the table.
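As a sketch of these properties, the uniqueness and non-NULL requirements of a primary key can be observed with a small script (Python with the built-in sqlite3 module; the table and data are invented for illustration):

```python
# Sketch: SQLite enforces the primary-key properties described above --
# every row must have a unique, non-NULL Roll_No.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE student (roll_no INTEGER PRIMARY KEY NOT NULL, name TEXT, phone TEXT)")
conn.execute("INSERT INTO student VALUES (1, 'Asha', '99990')")

# A duplicate primary key is rejected, so roll_no uniquely identifies a row.
try:
    conn.execute("INSERT INTO student VALUES (1, 'Bina', '88880')")
except sqlite3.IntegrityError as e:
    print("duplicate key rejected:", e)
```

Any superset of {roll_no}, such as {roll_no, name}, would also identify rows uniquely, which is exactly why it is a super key but not a candidate key.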

Q2. Define the three-schema architecture in DBMS.

Ans: The three-schema architecture is a framework in DBMS that helps in abstracting and managing data effectively by dividing it into three levels: internal, conceptual, and external.

  1. Internal Schema (Physical Level):
    The internal schema is the lowest level that describes the physical storage of data. It focuses on how data is stored, organized, and accessed in memory, considering storage efficiency and optimization techniques like indexing and file management.
  2. Conceptual Schema (Logical Level):
    The conceptual schema represents the entire database's logical view. It defines what data is stored and how different data elements are related to each other, such as tables, attributes, and relationships. This level hides the complexities of the physical storage and provides a unified view of the data structure.
  3. External Schema (View Level):
    The external schema defines various user-specific views of the database. Different users can have customized views based on their requirements. This level provides tailored access to the database by hiding certain data or presenting only relevant information to specific users.

Each schema level ensures data abstraction and separation, which simplifies the management of complex database systems.

Q3. Explain the two-phase locking protocol.

Ans: The two-phase locking (2PL) protocol is a method in DBMS that helps control the order of transactions to avoid problems like data conflicts and ensure that transactions happen in a consistent way. It works in two phases: growing and shrinking.

  1. Growing Phase:
    In this phase, the transaction can keep on acquiring (getting) locks on the data it needs. A lock is a way of securing data so that no other transaction can change it at the same time. During this growing phase, the transaction can get more locks as needed, but it cannot release (give up) any locks.
  2. Shrinking Phase:
    Once the transaction has acquired all the locks it needs, it enters the shrinking phase. In this phase, it starts releasing the locks it holds. After entering this phase, the transaction cannot get any new locks; it can only release the locks it already has.

By following this two-phase process, the protocol ensures that transactions happen one after another without causing confusion or errors. This makes the database operations more reliable and consistent.
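The two phases can be sketched in a few lines of Python (a toy illustration, not a real lock manager; all names are invented):

```python
# Toy sketch of the 2PL discipline: a transaction may acquire locks only
# while it has released none; the first release starts the shrinking phase.
class TwoPhaseTxn:
    def __init__(self, name):
        self.name = name
        self.held = set()
        self.shrinking = False  # becomes True after the first release

    def lock(self, item):
        if self.shrinking:
            raise RuntimeError("2PL violation: cannot lock after releasing")
        self.held.add(item)

    def unlock(self, item):
        self.shrinking = True   # entering the shrinking phase
        self.held.discard(item)

t = TwoPhaseTxn("T1")
t.lock("A"); t.lock("B")   # growing phase
t.unlock("A")              # shrinking phase begins
try:
    t.lock("C")            # illegal under 2PL
except RuntimeError as e:
    print(e)
```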

Q4. What is NULL? What is its importance?

Ans: In DBMS, NULL represents a value that is missing, unknown, or not applicable. It is different from zero or an empty string; NULL means no data is stored in that field.

Importance of NULL:

  1. Represents Missing Data:
    NULL is useful when data is not available or not provided. For example, in a student database, if a student’s phone number is unknown, NULL can be used instead of leaving it blank.
  2. Indicates Inapplicable Values:
    Sometimes, a field may not apply to all records. For instance, in an employee database, some employees might not have a middle name. NULL can represent such cases.
  3. Maintains Data Integrity:
    NULL helps in maintaining consistency in databases by providing a standard way to represent missing or irrelevant data, ensuring data integrity and accuracy.
  4. Supports Conditional Queries:
    In SQL queries, conditions involving NULL are handled separately using IS NULL or IS NOT NULL, helping in filtering and analysing incomplete data effectively.
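The special behaviour of NULL in conditions can be demonstrated with Python's built-in sqlite3 module (table and data invented for illustration):

```python
# Sketch: SQLite treats NULL as "unknown", so ordinary comparisons with
# NULL never match; IS NULL / IS NOT NULL must be used instead.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE student (roll_no INTEGER, phone TEXT)")
conn.executemany("INSERT INTO student VALUES (?, ?)",
                 [(1, "99990"), (2, None)])

# '= NULL' matches nothing, because NULL is not equal to anything.
print(conn.execute("SELECT COUNT(*) FROM student WHERE phone = NULL").fetchone()[0])  # 0

# IS NULL correctly finds the student whose phone number is missing.
print(conn.execute("SELECT roll_no FROM student WHERE phone IS NULL").fetchall())     # [(2,)]
```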

Q5. How is a weak entity set represented in an ER diagram? Give a simple, suitable example.

Ans: In an ER (Entity-Relationship) diagram, a weak entity set is represented by:

  1. Double Rectangles: The weak entity itself is represented by a rectangle with double lines.
  2. Double Diamonds: The relationship between the weak entity and its identifying strong entity is represented by a diamond with double lines.
  3. Partial Key (Dashed Underline): The weak entity’s attribute that partially identifies it is underlined using a dashed line.

Simple Example:
Consider a scenario where “Dependents” (like spouse or children) are linked to an “Employee”. The Dependents cannot be uniquely identified without knowing which Employee they belong to.

In the ER diagram, Employee is drawn as a single rectangle (strong entity), Dependent as a double rectangle (weak entity), the identifying relationship "Has" as a double diamond connecting them, and the Dependent's partial key (for example, Name) with a dashed underline.

This setup visually distinguishes a weak entity set in an ER diagram while showing its dependency on a strong entity.

Q6. Draw a diagram for a traditional "library management system".

Ans:

The ER diagram for a Library Management System includes key entities such as Library, Books, Members, Staff, and Transactions. Relationships (for example, Members borrow Books and Staff record Transactions) and attributes are drawn to illustrate how they interact in the system.

Q7. Consider the following schema: Employee_Salary (EmpNo, EName, Dept, DOB, Salary)

Write SQL to perform the following:

  1. Display the number of employees in each department.

  2. Display the total and average salaries of employees in the "Computer Science" department.

  3. Display the sum of salaries for all departments.

  4. Display the highest and lowest salary for the "Computer Science" department.

  5. Display the names of those employees whose name starts with "A".

Ans: SQL Queries for Employee_Salary Schema

Given Schema: Employee_Salary (EmpNo, EName, Dept, DOB, Salary)

1. Display the number of employees in each department:

```sql
SELECT Dept, COUNT(*) AS Number_of_Employees
FROM Employee_Salary
GROUP BY Dept;
```

2. Display the total and average salaries of employees in the "Computer Science" department:

```sql
SELECT SUM(Salary) AS Total_Salary, AVG(Salary) AS Average_Salary
FROM Employee_Salary
WHERE Dept = 'Computer Science';
```

3. Display the sum of salaries for all departments:

```sql
SELECT Dept, SUM(Salary) AS Total_Salary
FROM Employee_Salary
GROUP BY Dept;
```

4. Display the highest and lowest salary for the "Computer Science" department:

```sql
SELECT MAX(Salary) AS Highest_Salary, MIN(Salary) AS Lowest_Salary
FROM Employee_Salary
WHERE Dept = 'Computer Science';
```

5. Display the names of those employees whose name starts with "A":

```sql
SELECT EName
FROM Employee_Salary
WHERE EName LIKE 'A%';
```
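If you want to verify the queries above, they can be run against a small sample table using Python's built-in sqlite3 module (the sample data here is invented for illustration):

```python
# Sketch: run a few of the Q7 queries against a tiny Employee_Salary table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE Employee_Salary
                (EmpNo INTEGER, EName TEXT, Dept TEXT, DOB TEXT, Salary REAL)""")
conn.executemany("INSERT INTO Employee_Salary VALUES (?,?,?,?,?)", [
    (1, "Amit", "Computer Science", "1990-01-01", 50000),
    (2, "Bela", "Computer Science", "1991-02-02", 70000),
    (3, "Arun", "Mathematics",      "1989-03-03", 45000),
])

# Employees per department, totals for Computer Science, names starting with A.
print(conn.execute("SELECT Dept, COUNT(*) FROM Employee_Salary GROUP BY Dept").fetchall())
print(conn.execute("""SELECT SUM(Salary), AVG(Salary) FROM Employee_Salary
                      WHERE Dept = 'Computer Science'""").fetchone())
print(conn.execute("SELECT EName FROM Employee_Salary WHERE EName LIKE 'A%'").fetchall())
```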

Q8. Define 2NF and 3NF.

Ans: 2NF (Second Normal Form):

A table is said to be in Second Normal Form (2NF) if it meets the criteria for First Normal Form (1NF) and, in addition, every non-key attribute is fully functionally dependent on the entire primary key. This means that if the primary key is a composite key (made up of more than one column), then all the attributes in the table must depend on all parts of this composite key, not just some of them.

For example: Consider a table with the primary key consisting of StudentID and CourseID. If there is an attribute like Grade, it should depend on both StudentID and CourseID together. If Grade depended only on StudentID, then the table would not be in 2NF.

3NF (Third Normal Form):

A table is in Third Normal Form (3NF) if it is in Second Normal Form (2NF) and, in addition, there are no transitive dependencies. This means that non-key attributes should not depend on other non-key attributes. In simpler terms, every non-key attribute must be directly dependent on the primary key and not on other non-key attributes.

For example: In a table where StudentID is the primary key, if there is a non-key attribute like DepartmentName which depends on another non-key attribute DepartmentID, then the table is not in 3NF. Instead, DepartmentName should directly depend on StudentID or be moved to a separate table.

Q9. Write a short note on file indexing.

Ans: File indexing is a technique used in databases to speed up data retrieval by creating a data structure that allows for quick access to records. Instead of scanning through the entire file, the database can use the index to find data more efficiently.

Purpose: The main purpose of indexing is to make data retrieval faster. An index provides a faster way to locate data by creating a sorted list of keys and their corresponding pointers to the actual data.

Structure: An index typically uses a hierarchical or tree-like structure, such as a B-tree or hash table. This structure helps in quickly finding the data without scanning the whole file.

Types of Indexes:

  1. Primary Index: defined on the ordering key field of an ordered data file.
  2. Secondary Index: defined on a non-ordering field to provide an additional access path.
  3. Clustering Index: defined on an ordering non-key field, grouping records with the same field value together.

Indexes can also be classified as dense (one index entry per record) or sparse (one entry per block of records).

Benefits: Indexing enhances the speed of data retrieval and improves performance for searching, sorting, and filtering operations.

Trade-offs: While indexing speeds up read operations, it requires additional storage space and can slow down write operations, such as inserts, updates, and deletes, because the index needs to be updated.
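A quick way to see an index being used is SQLite's EXPLAIN QUERY PLAN (a sketch using Python's sqlite3 module; the table and index names are invented, and the exact plan text varies between SQLite versions):

```python
# Sketch: before an index exists, an equality search is a full scan;
# after CREATE INDEX, the plan switches to an index search.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (id INTEGER, name TEXT)")

plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM emp WHERE name = 'A'").fetchall()
conn.execute("CREATE INDEX idx_emp_name ON emp(name)")
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM emp WHERE name = 'A'").fetchall()

print(plan_before[-1][-1])  # e.g. "SCAN emp" (a full table scan)
print(plan_after[-1][-1])   # e.g. "SEARCH emp USING INDEX idx_emp_name (name=?)"
```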

Q10. Write a short note on query optimization techniques.

Ans: Query optimization is a critical process in database management systems (DBMS) aimed at improving the performance of SQL queries. The objective is to execute queries in the most efficient manner possible, minimizing resource usage and reducing the time required to retrieve data.

Techniques:

  1. Indexing: Creating indexes on columns that are frequently used in search conditions or joins can significantly speed up query performance. Indexes provide a faster way to access data by allowing the database to locate the required information quickly.
  2. Query Restructuring: Sometimes, rewriting a query can improve its performance. This may involve simplifying complex queries, using subqueries effectively, or eliminating unnecessary calculations to make the query more efficient.
  3. Execution Plan Analysis: The DBMS generates an execution plan for each query, which outlines how the data will be retrieved. Analyzing this plan helps in identifying any performance issues and selecting the best strategy for executing the query.
  4. Database Statistics: Keeping accurate and up-to-date database statistics is essential for the query optimizer. These statistics help the optimizer make informed decisions about the most efficient way to execute a query.
  5. Join Optimization: When a query involves multiple tables, choosing the appropriate join method (e.g., nested loop, hash join, or merge join) and ordering the joins efficiently can improve query performance.

Q11. Write a short note on Armstrong's axioms.

Ans: Armstrong's axioms are a set of rules used in databases to help understand and work with functional dependencies. These rules are essential for organizing and normalizing databases to make them more efficient and reduce redundancy.

Here are the three basic axioms:

  1. Reflexivity: If a set of attributes Y is a subset of X, then X determines Y (X → Y). For example, the set {StudentID, Course} determines {StudentID}, {Course}, or both, because each is part of the set.
  2. Augmentation: If X determines Y, then adding the same extra attributes to both sides preserves the dependency (if X → Y, then XZ → YZ). For example, if {StudentID} determines {Name}, then {StudentID, Course} determines {Name, Course}.
  3. Transitivity: If X determines Y and Y determines Z, then X also determines Z. For example, if {StudentID} determines {Course} and {Course} determines {Instructor}, then {StudentID} determines {Instructor} through {Course}.
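These axioms justify the standard attribute-closure algorithm: the closure of X is grown by repeatedly applying any dependency whose left-hand side already lies inside it. A sketch in Python (the function name and example dependencies are invented for illustration):

```python
# Compute the closure X+ of a set of attributes under functional dependencies.
def closure(attrs, fds):
    result = set(attrs)          # reflexivity: X determines itself
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            # transitivity/augmentation: if lhs is already derivable, so is rhs
            if set(lhs) <= result and not set(rhs) <= result:
                result |= set(rhs)
                changed = True
    return result

fds = [({"StudentID"}, {"Course"}), ({"Course"}, {"Instructor"})]
print(closure({"StudentID"}, fds))  # transitivity: StudentID -> Instructor
```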

Q12. Write a short note on the Network data model.

Ans: The Network Data Model is a database model that represents data as a network of interconnected records. This model allows for more complex relationships between data elements compared to the hierarchical model, supporting many-to-many relationships and providing greater flexibility in data management.

Key Features: Data is organized as records connected by links. Relationships are expressed as sets, each with one owner record and one or more member records; because a record can participate in several sets, many-to-many relationships can be represented.

Advantages: It is more flexible than the hierarchical model, supports many-to-many relationships, and gives fast, predictable access because related records are reached directly through links.

Disadvantages: The structure is complex to design and maintain, and applications must navigate the links explicitly, so changes to the database structure often require changes to application programs.

Q13. What is Fragmentation? How many types are there? Briefly explain each of them with suitable examples.

Ans: In a database, fragmentation means breaking a large table into smaller pieces. These smaller parts are easier to manage and store across different locations. The goal is to improve performance, availability, and efficiency.

Types of Fragmentation:

There are three main types of fragmentation:

  1. Horizontal Fragmentation: the table is divided by rows, so each fragment contains a subset of the tuples. Example: a Customer table split into the customers of the Kolkata branch and the customers of the Delhi branch.
  2. Vertical Fragmentation: the table is divided by columns, so each fragment contains a subset of the attributes along with the primary key (needed to reconstruct the table). Example: (EmpNo, EName) stored in one fragment and (EmpNo, Salary) in another.
  3. Mixed (Hybrid) Fragmentation: a combination of the two, where a table is first fragmented horizontally and the resulting fragments are then fragmented vertically, or vice versa.

This method of breaking down large databases helps in better query performance and resource management.

Q14. Explain the basic principle of the timestamp protocol.

Ans: Basic Principle of Timestamp Protocol:

The Timestamp Protocol is used in DBMS to control concurrency. It ensures that transactions are executed in a safe order, preventing conflicts.

How it Works:

Each transaction is given a unique timestamp when it starts. The timestamp reflects when the transaction began. For each data item, there are two timestamps:

  1. Read timestamp (RTS): the largest timestamp of any transaction that has successfully read the item.
  2. Write timestamp (WTS): the timestamp of the transaction that last wrote the item.

The protocol uses these timestamps to order transactions. Transactions with smaller timestamps (older transactions) are given priority over those with larger timestamps (newer transactions).

Rules for Reading and Writing:

For a read operation, a transaction is allowed to read a data item only if its write timestamp is older than the transaction’s timestamp. If the transaction is newer than the last write, it can proceed.

For a write operation, a transaction can write to a data item if both the read and write timestamps of that item are older than the transaction’s timestamp. This ensures no newer transaction has accessed or modified the data.

Example:

If Transaction T1 has a timestamp of 10 and Transaction T2 has a timestamp of 15, the protocol enforces the serial order T1 before T2. If T1 later tries to write a data item that T2 has already read or written, T1 is rolled back (and restarted with a new timestamp) to prevent inconsistency.
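The read/write rules can be sketched in Python (a toy model of basic timestamp ordering; the class and function names are invented):

```python
# Toy sketch of basic timestamp ordering: each item tracks its read and
# write timestamps; an operation arriving "too late" is rejected,
# which in a real DBMS would roll the transaction back.
class Item:
    def __init__(self):
        self.rts = 0  # largest timestamp of any read so far
        self.wts = 0  # timestamp of the last write

def read(item, ts):
    if ts < item.wts:            # a younger transaction already wrote the item
        return False             # reject: reading it would break the order
    item.rts = max(item.rts, ts)
    return True

def write(item, ts):
    if ts < item.rts or ts < item.wts:
        return False             # a younger transaction already used the item
    item.wts = ts
    return True

a = Item()
print(write(a, 10))   # T1 (ts=10) writes A: allowed -> True
print(read(a, 15))    # T2 (ts=15) reads A: allowed -> True
print(write(a, 10))   # T1 writes A after T2's read: rejected -> False
```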

Q15. What is deadlock? What are the causes of deadlock in a database system?

Ans: In a database system, deadlock happens when two or more transactions block each other indefinitely. Each transaction is waiting for a resource that is held by another transaction, and no one can proceed.

Causes of Deadlock in a Database System:

  1. Mutual Exclusion:
    In a database, resources like data items are held in an exclusive manner by one transaction at a time. If Transaction A locks a resource, Transaction B cannot access it until A releases it. If both transactions are holding resources and waiting for each other’s resources, a deadlock can occur.
  2. Hold and Wait:
    This happens when a transaction is holding some resources while waiting for others. For example, Transaction A holds Resource 1 and is waiting for Resource 2, while Transaction B holds Resource 2 and is waiting for Resource 1. Neither can proceed, leading to a deadlock.
  3. No Preemption:
    Resources in a database are not forcibly taken away from a transaction. Once a transaction holds a resource, it cannot be preempted by another transaction. If a transaction does not release a resource voluntarily, deadlock can occur.
  4. Circular Wait:
    In a deadlock situation, a circular chain of transactions is formed. For instance, Transaction A is waiting for a resource held by Transaction B, B is waiting for a resource held by Transaction C, and C is waiting for a resource held by Transaction A. This circular waiting leads to deadlock.
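Circular wait is usually detected with a wait-for graph, where an edge T → U means transaction T is waiting for a resource held by U; a cycle in the graph means deadlock. A minimal sketch in Python (the function name is invented):

```python
# Detect a cycle in a wait-for graph using depth-first search.
def has_cycle(edges):
    graph = {}
    for a, b in edges:
        graph.setdefault(a, []).append(b)

    visiting, done = set(), set()

    def dfs(node):
        if node in visiting:
            return True          # back edge: cycle found
        if node in done:
            return False
        visiting.add(node)
        if any(dfs(n) for n in graph.get(node, [])):
            return True
        visiting.discard(node)
        done.add(node)
        return False

    return any(dfs(n) for n in graph)

# A waits for B, B waits for C, C waits for A: the circular wait above.
print(has_cycle([("A", "B"), ("B", "C"), ("C", "A")]))  # True: deadlock
print(has_cycle([("A", "B"), ("B", "C")]))              # False: no cycle
```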

Q16. Discuss the relative advantages and disadvantages of a distributed database.

Ans: Advantages of a Distributed Database:

  1. Improved Reliability and Availability:
    In a distributed database, data is stored across multiple locations. If one site fails, the system can still function using data from other sites. This improves the reliability and availability of the database.
  2. Scalability:
    Distributed databases can easily scale by adding more nodes or sites. This allows the system to handle increased loads and larger datasets without performance degradation.
  3. Local Control:
    Each site in a distributed database can manage its data independently. This allows for better control and customization to meet the specific needs of local users, reducing the dependency on a central site.
  4. Faster Data Access:
    Since data is distributed across multiple sites, users can access data from the nearest location. This reduces the time needed to retrieve data, improving overall performance.
  5. Flexibility:
    A distributed database can be designed to meet the specific needs of different locations. For example, certain data can be stored locally for faster access while other data is shared globally.

Disadvantages of a Distributed Database:

  1. Complexity:
    Managing a distributed database is more complex compared to a centralized one. It involves challenges like data synchronization, consistency, and dealing with network issues.
  2. Higher Costs:
    Setting up and maintaining a distributed database is usually more expensive. It requires additional hardware, software, and skilled personnel to manage the system effectively.
  3. Data Security:
    In a distributed system, data is spread across multiple sites, which can increase the risk of security breaches. Ensuring data security in such a system requires robust security measures.
  4. Data Integrity:
    Maintaining data integrity and consistency across multiple sites is challenging. There’s a risk of data conflicts, especially when multiple users update the same data simultaneously.
  5. Network Dependency:
    Distributed databases rely heavily on network communication. If the network fails or becomes slow, it can affect the performance and availability of the entire system.

Q17. Explain cascading rollback with an example.

Ans: Cascading rollback happens when the failure of one transaction causes other dependent transactions to fail and roll back as well. This occurs because transactions rely on each other's changes, so if one fails, the others may become invalid.

Example of Cascading Rollback:

  1. Transaction T1: Updates a student’s marks.
  2. Transaction T2: Reads the updated marks from T1 and uses them to calculate the student’s grade.
  3. Transaction T3: Reads the grade calculated by T2 and updates the report card.

In this situation, if T1 fails and is rolled back, T2 must also be rolled back because the marks it read from T1 are no longer valid, and T3 must be rolled back in turn because the grade it read from T2 is now invalid.

This chain reaction, where the failure of T1 affects T2 and T3, is called cascading rollback.

Q18. What is a Transaction? What is interleaving in a Transaction?

Ans: A transaction in DBMS is a sequence of operations performed as a single logical unit of work. These operations could involve reading, writing, updating, or deleting data in a database. A transaction must satisfy four key properties, known as ACID properties: Atomicity, Consistency, Isolation, and Durability.

Interleaving refers to the process of executing multiple transactions in a way where their operations are mixed or overlapped. This is done to maximize system efficiency and allow concurrent transactions.

For example, if two transactions, T1 and T2, are being executed, their operations may be interleaved like this: T1: Read(A); T2: Read(B); T1: Write(A); T2: Write(B).

Interleaving is common in multi-user environments where multiple transactions are handled simultaneously. The goal is to improve performance while ensuring that the outcome is as if the transactions were executed sequentially.

Q19. Describe the properties of a Transaction.

Ans: Properties of Transaction (ACID Properties):

In DBMS, transactions must satisfy four essential properties known as ACID properties. These properties ensure that transactions are processed reliably and maintain the integrity of the database.

  1. Atomicity:
    Atomicity means that a transaction is treated as a single unit. Either all operations in the transaction are completed successfully, or none are applied. If any operation fails, the entire transaction is rolled back, ensuring that no partial changes are made to the database.
    Example: If a bank transaction involves transferring money from Account A to Account B, either both debit and credit operations happen, or neither happens.
  2. Consistency:
    Consistency ensures that a transaction brings the database from one valid state to another. The integrity of the data is maintained before and after the transaction. Even if a transaction fails, the database remains consistent.
    Example: In the same bank transaction, if money is deducted from Account A, it must be added to Account B. The total balance across accounts remains consistent.
  3. Isolation:
    Isolation ensures that transactions are executed independently, without interference from other transactions. Even when transactions run concurrently, the outcome should be the same as if they were executed one after the other.
    Example: If two customers are withdrawing money from the same account at the same time, each transaction should be processed without affecting the other.
  4. Durability:
    Durability guarantees that once a transaction is committed, the changes are permanent, even in case of a system failure. The committed data is saved in non-volatile storage, ensuring it is not lost.
    Example: After transferring money between accounts, even if the system crashes, the updated balances are saved and remain intact.

Q20. Explain 5th Normal Form with an example.

Ans: The 5th Normal Form (5NF), also known as Project-Join Normal Form (PJNF), is a level of database normalization used to ensure that a database is free from redundancy and anomalies due to multiple relationships among data.

Definition:

A table is in 5NF if it is in 4NF (Fourth Normal Form) and all its join dependencies are implied by the candidate keys. This means that every table should be decomposed into smaller tables to ensure that the original data can be reconstructed using joins without redundancy or loss of information.

Key Points:

  1. 5NF is concerned with join dependencies, not just functional or multivalued dependencies.
  2. A table violates 5NF when it can be decomposed into three or more smaller tables that, when joined, reproduce the original table exactly.
  3. The decomposition must be lossless: no information may be lost and no spurious rows may appear when the fragments are joined.

Example:

Consider a table that tracks student course enrollments and their instructors:

| Student | Course  | Instructor |
| ------- | ------- | ---------- |
| Alice   | Math    | Dr. Smith  |
| Alice   | Science | Dr. Jones  |
| Bob     | Math    | Dr. Smith  |
| Bob     | History | Dr. Brown  |

In this table, there is a join dependency among (Student, Course), (Course, Instructor), and (Student, Instructor): the full relation equals the join of its three two-column projections, so storing all three columns together introduces redundancy.

Decomposition into 5NF:

To decompose this table into 5NF, we need to ensure that no information is lost and the data can be reconstructed by joining the decomposed tables.

  1. First Decomposition:

Student-Course Table:

| Student | Course  |
| ------- | ------- |
| Alice   | Math    |
| Alice   | Science |
| Bob     | Math    |
| Bob     | History |

Course-Instructor Table:

| Course  | Instructor |
| ------- | ---------- |
| Math    | Dr. Smith  |
| Science | Dr. Jones  |
| History | Dr. Brown  |

  2. Second Decomposition:

Student-Instructor Table:

| Student | Instructor |
| ------- | ---------- |
| Alice   | Dr. Smith  |
| Alice   | Dr. Jones  |
| Bob     | Dr. Smith  |
| Bob     | Dr. Brown  |

Each of the decomposed tables satisfies 5NF: within each table, the only non-trivial join dependencies are implied by its candidate key, and joining the three tables reconstructs the original relation exactly, with no loss of information and no spurious rows.

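The lossless-join property of this decomposition can be checked directly: joining the three decomposed tables reproduces exactly the four original rows. A sketch using Python's built-in sqlite3 module (table names are abbreviated for illustration):

```python
# Sketch: join the three 5NF fragments and confirm the original relation
# comes back with no spurious rows.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sc (Student TEXT, Course TEXT);
CREATE TABLE ci (Course TEXT, Instructor TEXT);
CREATE TABLE si (Student TEXT, Instructor TEXT);
INSERT INTO sc VALUES ('Alice','Math'),('Alice','Science'),('Bob','Math'),('Bob','History');
INSERT INTO ci VALUES ('Math','Dr. Smith'),('Science','Dr. Jones'),('History','Dr. Brown');
INSERT INTO si VALUES ('Alice','Dr. Smith'),('Alice','Dr. Jones'),('Bob','Dr. Smith'),('Bob','Dr. Brown');
""")

rows = conn.execute("""
    SELECT sc.Student, sc.Course, ci.Instructor
    FROM sc JOIN ci ON sc.Course = ci.Course
            JOIN si ON si.Student = sc.Student AND si.Instructor = ci.Instructor
    ORDER BY sc.Student, sc.Course
""").fetchall()
print(rows)  # the four original (Student, Course, Instructor) rows
```
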
Q20. What is Embedded SQL? Give an application example and differentiate between static and dynamic embedded SQL.

Ans: Embedded SQL refers to integrating SQL commands within a host programming language such as C, Java, or COBOL. This approach allows the application to execute SQL queries and interact with a database directly from the code.

Application Example:

Consider a banking application where embedded SQL is used to manage various tasks like retrieving account details, updating balances, or inserting new transactions. The application code includes SQL commands to perform these operations, making it easier to handle database interactions within the application logic.

Static vs. Dynamic Embedded SQL:

  1. Static Embedded SQL: the SQL statements are fixed and known at compile time, so they can be parsed, validated, and optimized before the program runs. This gives better performance but less flexibility.
  2. Dynamic Embedded SQL: the SQL statements are constructed and executed at run time (for example, built from user input). This is more flexible but incurs parsing and optimization overhead at execution time.

Q21. Differentiate between 3NF and BCNF.

Ans:

| Aspect | Third Normal Form (3NF) | Boyce-Codd Normal Form (BCNF) |
| ------ | ----------------------- | ----------------------------- |
| Definition | A table is in 3NF if it is in 2NF and all non-key attributes are non-transitively dependent on the primary key. | A table is in BCNF if it is in 3NF and, for every non-trivial functional dependency, the left-hand side is a superkey. |
| Focus | Removes transitive dependencies, where non-key attributes depend on other non-key attributes. | Ensures that every functional dependency is enforced by a superkey, which handles more complex dependency scenarios. |
| Requirement | Deals with non-key attributes and ensures they depend only on the primary key. | Handles all functional dependencies, including those where a non-superkey attribute determines another attribute. |
| Example | In a table (StudentID, Course, Instructor) where Instructor depends on Course (a non-key attribute), the transitive dependency must be removed for the table to reach 3NF. | A table (Course, Instructor, Room) with Course → Instructor and Instructor → Room is not in BCNF if Instructor is not a superkey. |
| Handling Dependencies | Handles transitive dependencies by ensuring non-key attributes do not depend on other non-key attributes. | Handles all functional dependencies by ensuring every determinant is a superkey. |
| Normalization Goal | Aims to eliminate transitive dependency and improve data integrity. | Aims to eliminate redundancy arising from functional dependencies by enforcing stricter constraints. |

Q22. What is View Serializability?

Ans: A schedule is view serializable if it is view-equivalent to some serial schedule: each transaction reads the same initial values, reads its data from the same writing transactions, and the final write on each data item is the same as in that serial schedule. This means the outcome of the concurrent execution matches the result of some serial execution.

Example:

Suppose we have two transactions, T1 and T2, each of which reads and then writes the same data item A.

In one schedule, T1 might complete all its operations before T2 starts. In another schedule, T1 and T2’s operations might be interleaved. If the final result of the interleaved schedule is the same as if T1 and T2 had run serially, the schedule is view serializable.

Q23. Explain a non-recoverable schedule with an example.

Ans: A non-recoverable schedule is one in which a transaction commits after reading data written by another transaction that has not yet committed. If that other transaction later fails and is rolled back, the committed transaction cannot be undone, so the database cannot be restored to a consistent state.

Example:

Consider two transactions, T1 and T2, and the following sequence of operations:

  1. T1 writes a new value to A.
  2. T2 reads A (the uncommitted value written by T1) — a dirty read.
  3. T2 writes a value based on what it read and commits.
  4. T1 fails and is rolled back.

In this scenario, T2 has already committed using a value of A that, after T1's rollback, never officially existed. Since a committed transaction cannot be undone, the database is left in an inconsistent state. Because T2 committed before the transaction it read from, the schedule is non-recoverable; in a recoverable schedule, T2 would have been required to delay its commit until after T1 committed.

Q24. What is the importance of a trigger?

Ans: Importance of a Trigger:

A trigger is a special type of stored procedure in a database that automatically executes or fires in response to specific events, such as insertions, updates, or deletions on a table. Triggers are crucial in database management for several reasons:

  1. Automatic Enforcement of Business Rules: Triggers can enforce complex business rules automatically. For example, a trigger can ensure that when a new employee is added, their salary adheres to company policies.
  2. Data Validation: Triggers can validate data before it's committed to the database. For example, a trigger can prevent the insertion of a negative value in a salary field.
  3. Auditing and Logging: Triggers can maintain logs of changes made to data. For example, a trigger can record details of updates made to a table, capturing who made the change and when.
  4. Maintaining Referential Integrity: Triggers help maintain data consistency and integrity. For example, a trigger can automatically update or delete related records in other tables when a record in the primary table is updated or deleted.
  5. Automating Actions: Triggers can automate tasks that need to occur as a result of certain events. For example, a trigger can automatically send a notification email when a new order is placed.

Example:

Consider an e-commerce system with a Products table and an Inventory table. If a product's quantity is updated, you might use a trigger to automatically adjust related records or notify the inventory manager.
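As a sketch of the e-commerce example, SQLite supports triggers directly (the trigger, table, and column names below are invented; run via Python's sqlite3 module):

```python
# Sketch: an AFTER UPDATE trigger that logs every quantity change on
# Products into an audit table, firing automatically on each update.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Products (id INTEGER PRIMARY KEY, qty INTEGER);
CREATE TABLE AuditLog (product_id INTEGER, old_qty INTEGER, new_qty INTEGER);

CREATE TRIGGER trg_qty_audit AFTER UPDATE OF qty ON Products
BEGIN
    INSERT INTO AuditLog VALUES (OLD.id, OLD.qty, NEW.qty);
END;

INSERT INTO Products VALUES (1, 100);
UPDATE Products SET qty = 80 WHERE id = 1;
""")

print(conn.execute("SELECT * FROM AuditLog").fetchall())  # [(1, 100, 80)]
```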

Q25. What is the role of COMMIT? Explain with examples.

Ans: In a database management system, the COMMIT statement is used to finalize a transaction. It ensures that all changes made during the transaction are saved permanently in the database.

Key Functions:

  1. Finalization of Changes: The COMMIT statement makes all changes made during a transaction permanent. Once committed, these changes cannot be undone.
  2. Consistency: By executing a COMMIT, the database is updated consistently, meaning that all operations within the transaction are applied together.
  3. Visibility: After committing, changes become visible to other transactions. This allows other users or processes to see the updated data.

Examples:

In a banking system, consider a transaction to transfer money between two accounts: the amount is first debited from Account A and then credited to Account B.

If the COMMIT is executed, the changes are saved, and the transfer is complete. If the transaction fails or is not committed, no changes are made, and the database remains unchanged.

In an online shopping system, placing an order may insert a row into the Orders table, reduce the product's stock quantity, and record the payment.

When the COMMIT is executed, all these operations are saved, and the order is processed. If the COMMIT is not executed, the transaction can be rolled back, and no changes will be saved.
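The money-transfer example can be sketched with Python's sqlite3 module (account names and amounts invented): changes made before a ROLLBACK disappear, while changes followed by COMMIT persist.

```python
# Sketch: an abandoned transfer is rolled back; a completed one is committed.
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # explicit transactions
conn.execute("CREATE TABLE account (name TEXT, balance INTEGER)")
conn.execute("INSERT INTO account VALUES ('A', 1000), ('B', 500)")

conn.execute("BEGIN")
conn.execute("UPDATE account SET balance = balance - 200 WHERE name = 'A'")
conn.execute("ROLLBACK")   # transfer abandoned: A is back to 1000

conn.execute("BEGIN")
conn.execute("UPDATE account SET balance = balance - 200 WHERE name = 'A'")
conn.execute("UPDATE account SET balance = balance + 200 WHERE name = 'B'")
conn.execute("COMMIT")     # both changes are now permanent

print(conn.execute("SELECT name, balance FROM account ORDER BY name").fetchall())
```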

Q26. Distinguish between Live Lock and Dead Lock.

Ans:

| Aspect | Live Lock | Dead Lock |
| --- | --- | --- |
| Definition | Two or more transactions continuously change states in response to each other but never progress to completion. | Two or more transactions are stuck, each waiting for resources held by the other, causing a standstill. |
| Symptoms | Transactions are active but continually change states without making progress. | Transactions are stuck and cannot proceed because each waits for the other to release resources. |
| Resource Utilization | Resources are used ineffectively as transactions loop trying to avoid conflict. | Resources are held but not used, since the transactions wait indefinitely. |
| Resolution | Often resolved by introducing random delays or backoff strategies to break the cycle. | Typically resolved by detecting the deadlock and aborting one or more transactions to break the cycle. |
| Example | Two transactions repeatedly try to acquire locks held by the other but back off each time, leading to continuous retrying. | Transaction A holds a lock on Resource 1 and waits for Resource 2 held by Transaction B, while B waits for Resource 1 held by A. |
| Impact on System | May decrease system performance due to constant retrying and context switching. | Halts the involved transactions completely, potentially causing a system-wide impact if not resolved. |
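The "random delays or backoff strategies" resolution can be sketched with Python threads (a hypothetical illustration, not a DBMS internal). Two workers acquire the two locks in opposite orders, which would deadlock with blocking acquires; using a timeout plus a random backoff lets both eventually finish:

```python
# Sketch: avoiding deadlock (and livelock) with timed acquires and random backoff.
import random
import threading
import time

lock_a, lock_b = threading.Lock(), threading.Lock()
done = []

def worker(name, first, second):
    while True:
        with first:
            # Try the second lock with a timeout instead of blocking forever;
            # blocking here is exactly the hold-and-wait that causes deadlock.
            if second.acquire(timeout=0.01):
                try:
                    done.append(name)
                    return
                finally:
                    second.release()
        # Back off for a *random* delay so the workers do not retry in lockstep;
        # constant synchronized retrying without progress would be a livelock.
        time.sleep(random.uniform(0, 0.01))

t1 = threading.Thread(target=worker, args=("A", lock_a, lock_b))
t2 = threading.Thread(target=worker, args=("B", lock_b, lock_a))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(done))  # ['A', 'B'] -- both workers completed
```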

Q27. Consider the following relational schema:
Employee (ename, street, location), Works (ename, cname, sal), Company (cname, location), Managers (ename, mgr_name)
Express the following in SQL :
i) Find the name and location of residence of all employees who work for 'CTS'.
ii) Find the name, street and location of all employees who work for 'TCS' and earn more than Rs. 20,000.
iii) Find the name of all employees who live in the same city as the company for which they work.
iv) Find the names of all employees who live in the same location and in the street as do their managers.
v) Assume the companies may be located in several cities. Find the names of all companies located in every location in which 'IBM' is located.
vi) Find the name of all employees who earn more than every employee of 'TCS' company

Ans: Here are the SQL queries for each of the requirements based on the given relational schema:

SQL Queries:

i) Find the name and location of residence of all employees who work for 'CTS':

```sql
SELECT E.ename, E.location
FROM Employee E
JOIN Works W ON E.ename = W.ename
WHERE W.cname = 'CTS';
```

ii) Find the name, street, and location of all employees who work for 'TCS' and earn more than Rs. 20,000:

```sql
SELECT E.ename, E.street, E.location
FROM Employee E
JOIN Works W ON E.ename = W.ename
WHERE W.cname = 'TCS' AND W.sal > 20000;
```

iii) Find the name of all employees who live in the same city as the company for which they work:

```sql
SELECT E.ename
FROM Employee E
JOIN Works W ON E.ename = W.ename
JOIN Company C ON W.cname = C.cname
WHERE E.location = C.location;
```

iv) Find the names of all employees who live in the same location and in the street as do their managers:

```sql
SELECT E.ename
FROM Employee E
JOIN Managers M ON E.ename = M.ename
JOIN Employee EM ON M.mgr_name = EM.ename
WHERE E.location = EM.location AND E.street = EM.street;
```

v) Assume the companies may be located in several cities. Find all company names located in every location in which 'IBM' is located:

```sql
SELECT C1.cname
FROM Company C1
WHERE NOT EXISTS (
    SELECT 1
    FROM Company C2
    WHERE C2.cname = 'IBM' AND NOT EXISTS (
        SELECT 1
        FROM Company C3
        WHERE C3.location = C2.location AND C3.cname = C1.cname
    )
);
```

vi) Find the name of all employees who earn more than every employee of 'TCS':

```sql
SELECT E1.ename
FROM Employee E1
JOIN Works W1 ON E1.ename = W1.ename
WHERE W1.sal > ALL (
    SELECT W2.sal
    FROM Works W2
    WHERE W2.cname = 'TCS'
);
```
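As a quick sanity check, query (i) can be run against the schema with Python's sqlite3 module. The sample rows below are made up purely for illustration:

```python
# Sketch: verifying query (i) on a tiny, made-up dataset via sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Employee (ename TEXT, street TEXT, location TEXT);
CREATE TABLE Works    (ename TEXT, cname TEXT, sal INTEGER);
INSERT INTO Employee VALUES ('Asha', 'Park St', 'Kolkata'),
                            ('Ravi', 'MG Road', 'Pune');
INSERT INTO Works VALUES ('Asha', 'CTS', 30000),
                         ('Ravi', 'TCS', 25000);
""")

rows = conn.execute("""
    SELECT E.ename, E.location
    FROM Employee E
    JOIN Works W ON E.ename = W.ename
    WHERE W.cname = 'CTS'
""").fetchall()
print(rows)  # [('Asha', 'Kolkata')] -- only the CTS employee is returned
```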


Q28. What is a critical section?

Ans: In computing, a critical section refers to a part of a program or process where shared resources (such as variables, files, or devices) are accessed or modified. To ensure the integrity of these resources and prevent conflicts or corruption, the critical section must be executed in such a way that only one process or thread can access it at a time.

Key Points:

  1. Exclusive Access: The critical section must be executed by only one process or thread at a time to prevent simultaneous access that could lead to inconsistencies or data corruption.
  2. Mutual Exclusion: This principle ensures that no two processes or threads can be in their critical sections at the same time. Mechanisms like locks, semaphores, or mutexes are used to enforce mutual exclusion.
  3. Synchronization: Proper synchronization techniques are needed to manage access to the critical section. These techniques prevent race conditions, where the outcome depends on the unpredictable sequence of access to the critical section.
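The three points above can be sketched with Python threads (a generic illustration, not tied to any particular DBMS): four threads increment a shared counter, and a lock ensures only one thread is inside the read-modify-write critical section at a time.

```python
# Sketch: a critical section guarded by a lock for mutual exclusion.
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:        # entry: acquire the lock (mutual exclusion)
            counter += 1  # critical section: shared state is modified
                          # exit: lock is released when the 'with' block ends

threads = [threading.Thread(target=increment, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 -- no updates lost, thanks to mutual exclusion
```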

Q29. What is a schedule?

Ans: In database management, a schedule refers to the sequence of operations (such as read and write actions) performed by multiple transactions. It outlines the order in which these operations are executed, and it is used to ensure that transactions are carried out in a way that maintains the database's consistency and integrity.

Key Points:

  1. Operations: A schedule includes various operations performed by transactions, such as reading from or writing to the database.
  2. Transaction Sequence: It represents the order in which transactions are executed. This can affect the final outcome of the database operations.
  3. Types of Schedules:
    Serial Schedule: Transactions are executed sequentially, one after another, without any overlap. The results are the same as if the transactions were executed in a single sequence.
    Concurrent Schedule: Transactions are executed concurrently (i.e., their operations are interleaved). It aims to improve performance but needs to ensure consistency through proper concurrency control mechanisms.
  4. Serializability: A schedule is considered serializable if its outcome is equivalent to some serial schedule. This means that even though transactions are executed concurrently, their final result is consistent with some serial order of execution.
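The contrast between a serial and a non-serializable concurrent schedule can be simulated in a few lines of Python (a toy model with one data item; the transaction names and the +100 updates are illustrative). The interleaved schedule below exhibits the classic lost-update problem:

```python
# Sketch: replaying read/write schedules over one data item x (initially 100).
# Each transaction reads x into its own buffer, adds 100, then writes it back.
def run(schedule):
    db = {"x": 100}
    local = {}
    for txn, op in schedule:
        if op == "read":
            local[txn] = db["x"]        # read(x) into the transaction's buffer
        else:
            db["x"] = local[txn] + 100  # write(x) from the buffer
    return db["x"]

serial = [("T1", "read"), ("T1", "write"), ("T2", "read"), ("T2", "write")]
interleaved = [("T1", "read"), ("T2", "read"), ("T1", "write"), ("T2", "write")]

print(run(serial))       # 300: equivalent to running T1 then T2
print(run(interleaved))  # 200: T1's update is lost -- not serializable
```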

Q30. What are semaphores?

Ans: Semaphores are synchronization tools used in computing to manage access to shared resources by multiple processes or threads. They help ensure that resources are used safely and efficiently, especially in concurrent systems where multiple processes or threads might need to access the same resource simultaneously.

Key Points:

  1. Definition: A semaphore is a synchronization primitive that controls access to a common resource by multiple processes or threads. It helps to prevent conflicts and ensures orderly access.
  2. Types of Semaphores: A binary semaphore takes only the values 0 and 1 and behaves like a mutex, while a counting semaphore takes any non-negative integer value and allows a fixed number of processes to access a resource concurrently.
  3. Operations: wait (P) decrements the semaphore and blocks the caller if the value would go below zero; signal (V) increments it and wakes a waiting process, if any.
  4. Usage: Semaphores are used to enforce mutual exclusion, limit the number of concurrent users of a resource, and coordinate the order of execution between processes.
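A counting semaphore can be demonstrated with Python's threading module (the "resource" here is hypothetical; the sleep simply simulates holding it for a moment). Six threads compete, but the semaphore admits at most two at a time:

```python
# Sketch: a counting semaphore capping concurrent access at 2.
import threading
import time

sem = threading.Semaphore(2)   # counting semaphore: at most 2 threads inside
active = 0
peak = 0
state_lock = threading.Lock()  # protects the active/peak bookkeeping

def use_resource():
    global active, peak
    with sem:                  # wait (P): blocks while 2 threads are inside
        with state_lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)       # simulate using the shared resource
        with state_lock:
            active -= 1
                               # signal (V): released on leaving the block

threads = [threading.Thread(target=use_resource) for _ in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(1 <= peak <= 2)  # True: concurrency never exceeded the semaphore count
```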

Q31. What is two-phase locking? Discuss it.

Ans: Two-Phase Locking (2PL) is a concurrency control protocol used in database systems to ensure that transactions are executed in a way that maintains consistency and avoids conflicts like deadlocks or data inconsistency. The main objective of 2PL is to ensure that a schedule is serializable, meaning it produces the same results as if transactions were executed one after another (serially).

Phases of Two-Phase Locking:

  1. Growing Phase: The transaction acquires all the locks it needs (shared or exclusive) but does not release any lock during this phase.
  2. Shrinking Phase: The transaction releases its locks; once it has released any lock, it cannot acquire a new one.

A transaction must follow these two phases strictly. The growing phase happens first, followed by the shrinking phase. This ensures that all required locks are obtained before any are released.
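The two-phase rule can be sketched as a toy Python class (a hypothetical illustration, not a real lock manager): the first unlock flips the transaction into its shrinking phase, after which any further lock request is rejected.

```python
# Sketch: enforcing the two phases of 2PL on a single transaction.
class TwoPhaseTransaction:
    def __init__(self):
        self.held = set()
        self.shrinking = False   # False means we are still in the growing phase

    def lock(self, item):
        if self.shrinking:
            raise RuntimeError("2PL violation: cannot lock after first unlock")
        self.held.add(item)

    def unlock(self, item):
        self.shrinking = True    # the first release ends the growing phase
        self.held.discard(item)

txn = TwoPhaseTransaction()
txn.lock("A")
txn.lock("B")       # still growing: allowed
txn.unlock("A")     # shrinking phase begins here
try:
    txn.lock("C")   # forbidden under 2PL
except RuntimeError as e:
    print("blocked:", e)
```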

How 2PL Works: