What’s the difference between clustered and non-clustered indexes in SQL Server?

Today, a co-worker asked me a question about indexes in SQL Server.

It’s a good question, and an easy one to answer!

In SQL Server, indexes are organized as B-trees. Each page in an index B-tree is called an index node. The top node of the B-tree is called the root node, and the nodes at the bottom level of the index are called the leaf nodes. Any index levels between the root and the leaf nodes are collectively known as intermediate levels. In a clustered index, the leaf nodes contain the data pages of the underlying table, while the root and intermediate-level nodes contain index pages holding index rows. Each index row contains a key value and a pointer to either an intermediate-level page in the B-tree or a data row in the leaf level of the index. The pages in each level of the index are linked in a doubly-linked list.

Nonclustered indexes have the same B-tree structure as clustered indexes, except for the following significant differences:

  • The data rows of the underlying table are not sorted and stored in order based on their nonclustered keys.
  • The leaf layer of a nonclustered index is made up of index pages instead of data pages.
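The structure described above can be sketched with a couple of toy classes. This is purely an illustration of the page/row layout, not SQL Server’s actual internal types; all names here are made up:

```csharp
using System;
using System.Collections.Generic;

// Toy sketch of the B-tree layout described above (clustered-index flavor):
// index pages hold key/pointer rows, and the leaf level holds the data rows.
public class IndexPage
{
    public bool IsLeaf;
    public IndexPage PrevPage;               // pages on each level form a
    public IndexPage NextPage;               // doubly-linked list
    public List<IndexRow> Rows = new List<IndexRow>();
}

public class IndexRow
{
    public int Key;                          // the index key value
    public IndexPage Child;                  // pointer to a lower-level page (null at leaf level)
    public object DataRow;                   // in a clustered index, the leaf row is the data row
}

public class BTreeSketch
{
    // Build a two-level tree: one root page pointing at two linked leaf pages.
    public static IndexPage BuildTinyTree()
    {
        var leaf1 = new IndexPage { IsLeaf = true };
        leaf1.Rows.Add(new IndexRow { Key = 1, DataRow = "row 1" });

        var leaf2 = new IndexPage { IsLeaf = true };
        leaf2.Rows.Add(new IndexRow { Key = 2, DataRow = "row 2" });

        leaf1.NextPage = leaf2;              // link the leaf level both ways
        leaf2.PrevPage = leaf1;

        var root = new IndexPage();
        root.Rows.Add(new IndexRow { Key = 1, Child = leaf1 });
        root.Rows.Add(new IndexRow { Key = 2, Child = leaf2 });
        return root;
    }
}
```

In a nonclustered index the leaf rows would carry a pointer back to the data row instead of the data row itself.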

Doubly Linked List Sample

using System;

namespace List
{
    public class DoublyLinkedList
    {
        public const int EQ = 0;   // CompareTo result that means "equal"

        private Node head = null;
        private Node tail = null;

        class Node : IComparable
        {
            public Node next;
            public Node prev;
            public object value;

            public Node(object value)
            {
                this.value = value;
            }

            // Compare the stored value with another value of the same type;
            // throws ArgumentException on a type mismatch.
            public int CompareTo(object value)
            {
                Type t = this.value.GetType();
                if (t != value.GetType())
                    throw new ArgumentException();
                if (t == typeof(string))
                    return this.value.ToString().CompareTo(value);
                if ((int)this.value == (int)value)
                    return 0;
                else if ((int)this.value < (int)value)
                    return -1;
                else
                    return 1;
            }
        }

        // Insert a new node at the front of the list.
        public void AddNodeToHead(object value)
        {
            Node node = new Node(value);
            if (head == null)
            {
                head = node;
                tail = node;
            }
            else
            {
                node.next = head;
                head.prev = node;
                head = node;
            }
        }

        // Append a new node at the end of the list.
        public void AddNodeToTail(object value)
        {
            Node node = new Node(value);
            if (tail == null)
            {
                head = node;
                tail = node;
            }
            else
            {
                node.prev = tail;
                tail.next = node;
                tail = node;
            }
        }

        // Remove the first node, if any.
        public void RemoveNodeFromHead()
        {
            if (head == null)
                return;
            if (head == tail)
            {
                head = null;
                tail = null;
            }
            else
            {
                head.next.prev = null;
                head = head.next;
            }
        }

        // Remove the last node, if any.
        public void RemoveNodeFromTail()
        {
            if (tail == null)
                return;
            if (head == tail)
            {
                head = null;
                tail = null;
            }
            else
            {
                tail.prev.next = null;
                tail = tail.prev;
            }
        }

        // Walk the list forward, printing each value (strings are quoted).
        public void PrintListFromHead()
        {
            Node node = head;
            while (node != null)
            {
                if (node.value is string)
                    Console.Write("'{0}' ", node.value);
                else
                    Console.Write("{0} ", node.value);
                node = node.next;
            }
            Console.WriteLine();
        }

        // Walk the list backward, printing each value (strings are quoted).
        public void PrintListFromTail()
        {
            Node node = tail;
            while (node != null)
            {
                if (node.value is string)
                    Console.Write("'{0}' ", node.value);
                else
                    Console.Write("{0} ", node.value);
                node = node.prev;
            }
            Console.WriteLine();
        }

        // Demonstrates direct pointer navigation; assumes at least three nodes.
        public void AccessNodeOfList()
        {
            Node node = head;
            Console.Write("'{0}' ", node.value);
            Console.Write("'{0}' ", node.next.value);
            Console.Write("'{0}' ", node.next.next.value);
            Console.Write("'{0}' ", node.next.next.prev.value);
            Console.Write("'{0}'\n", node.next.next.prev.prev.value);
        }

        // Indexer: walk forward from the head to the index-th node
        // (assumes the index is within range).
        public object this[int index]
        {
            get
            {
                Node node = head;
                while (index > 0)
                {
                    node = node.next;
                    index--;
                }
                return node.value;
            }
        }

        // Linear search; returns the zero-based index of the first match, or -1.
        public int SearchNode(object value)
        {
            int index = -1;
            if (head == null)
                return index;
            Node node = head;
            while (node != null)
            {
                index++;
                try
                {
                    if (node.CompareTo(value) == EQ)
                        return index;
                }
                catch { }   // type mismatch: not a match, keep scanning
                node = node.next;
            }
            return -1;
        }
    }

    public class DoublyLinkedListMain
    {
        static void Main(string[] args)
        {
            DoublyLinkedList list = new DoublyLinkedList();

            Console.WriteLine("Add 'BIOS' to Tail of List");
            list.AddNodeToTail("BIOS");
            Console.WriteLine("Add 'ADSL' to Head of List");
            list.AddNodeToHead("ADSL");
            Console.WriteLine("Add 'CATV' to Tail of List");
            list.AddNodeToTail("CATV");
            Console.WriteLine("Add  100   to Head of List");
            list.AddNodeToHead(100);
            Console.WriteLine("Add  1000  to Tail of List");
            list.AddNodeToTail(1000);

            Console.WriteLine("Print List From Head");
            list.PrintListFromHead();

            int index = list.SearchNode("CATV");
            if (index >= 0)
                Console.WriteLine("'{0}' is found.", list[index]);

            index = list.SearchNode(1000);
            if (index >= 0)
                Console.WriteLine("{0} is found.", list[index]);
        }
    }
}

Testing question (No.7)

Q: Why are there so many software bugs?
A: Generally speaking, there are bugs in software because of unclear requirements, software complexity, programming errors, changes in requirements, errors made in bug tracking, time pressure, poorly documented code and/or bugs in tools used in software development.

  • There are unclear software requirements because there is miscommunication as to what the software should or shouldn’t do.
  • Software complexity. All of the following contribute to the exponential growth in software and system complexity: Windows interfaces, client-server and distributed applications, data communications, enormous relational databases, and the sheer size of applications.
  • Programming errors occur because programmers and software engineers, like everyone else, can make mistakes.
  • As to changing requirements: in some fast-changing business environments, continuously modified requirements are a fact of life. Sometimes customers do not understand the effects of changes, or understand them but request them anyway. The changes require redesign of the software and rescheduling of resources; some of the work already completed has to be redone or discarded, and hardware requirements can be affected, too.

Q: Do automated testing tools make testing easier?
A: Yes and no.
For larger projects, or ongoing long-term projects, they can be valuable. But for small projects, the time needed to learn and implement them is usually not worthwhile.
A common type of automated tool is the record/playback type. For example, a test engineer clicks through all combinations of menu choices, dialog box choices, buttons, etc. in a GUI and has an automated testing tool record and log the results. The recording is typically in the form of text, based on a scripting language that the testing tool can interpret.
If a change is made (e.g. new buttons are added, or some underlying code in the application is changed), the application is re-tested by simply playing back the recorded actions and comparing the results to the logged baseline, in order to check the effects of the change.
One problem with such tools is that if there are continual changes to the product being tested, the recordings have to be changed so often that it becomes a very time-consuming task to continuously update the scripts.
Another problem with such tools is the interpretation of the results (screens, data, logs, etc.) that can be a time-consuming task.

Q: What makes a good test engineer?
A: Good test engineers have a “test to break” attitude. They take the point of view of the customer, have a strong desire for quality, and pay close attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, as is an ability to communicate with both technical and non-technical people. Previous software development experience is also helpful: it provides a deeper understanding of the software development process, gives the test engineer an appreciation for the developers’ point of view, and reduces the learning curve in automated test tool programming.

Q: What is a test plan?
A: A software project test plan is a document that describes the objectives, scope, approach and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the why and how of product validation. It should be thorough enough to be useful, but not so thorough that no one outside the test group will be able to read it.

Q: What is a test case?
A: A test case is a document that describes an input, action, or event and its expected result, in order to determine if a feature of an application is working correctly. A test case should contain particulars such as:

· Test case identifier;

· Test case name;

· Objective;

· Test conditions/setup;

· Input data requirements/steps, and

· Expected results.
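Those particulars could be modeled as a simple class. This is a hypothetical sketch, not a standard schema; the field names are just illustrations of the list above:

```csharp
using System;

// Hypothetical container for the test case particulars listed above.
public class TestCase
{
    public string Id;               // test case identifier
    public string Name;             // test case name
    public string Objective;
    public string Setup;            // test conditions/setup
    public string Steps;            // input data requirements/steps
    public string ExpectedResult;

    // One-line summary, handy for test run logs.
    public override string ToString()
    {
        return string.Format("[{0}] {1}: {2}", Id, Name, ExpectedResult);
    }
}
```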

Q: How do you create a test plan/design?
A: Test scenarios and/or cases are prepared by reviewing functional requirements of the release and preparing logical groups of functions that can be further broken into test procedures. Test procedures define test conditions, data to be used for testing and expected results, including database updates, file outputs, report results. Generally speaking…

· Test cases and scenarios are designed to represent both typical and unusual situations that may occur in the application.

· Test engineers define unit test requirements and unit test cases. Test engineers also execute unit test cases.

· It is the test team that, with assistance of developers and clients, develops test cases and scenarios for integration and system testing.

· Test scenarios are executed through the use of test procedures or scripts.

· Test procedures or scripts define a series of steps necessary to perform one or more test scenarios.

· Test procedures or scripts include the specific data that will be used for testing the process or transaction.

· Test procedures or scripts may cover multiple test scenarios.

· Test scripts are mapped back to the requirements and traceability matrices are used to ensure each test is within scope.

Q: How do you create a test plan/design? (Cont’d…)

  • Test data is captured and baselined prior to testing. This data serves as the foundation for unit and system testing and is used to exercise system functionality in a controlled environment.
  • Some output data is also baselined for future comparison. Baselined data is used to support future application maintenance via regression testing.
  • A pretest meeting is held to assess the readiness of the application and the environment and data to be tested. A test readiness document is created to indicate the status of the entrance criteria of the release.

Inputs for this process:

  • Approved Test Strategy Document.
  • Test tools, or automated test tools, if applicable.
  • Previously developed scripts, if applicable.
  • Test documentation problems uncovered as a result of testing.
  • A good understanding of software complexity and module path coverage, derived from general and detailed design documents, e.g. software design document, source code, and software complexity data.
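The traceability matrix mentioned above, which maps test scripts back to requirements so that each test can be checked for scope, can be sketched as a simple dictionary. The script and requirement names here are made up for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Traceability
{
    // Map each test script to the requirements it covers (hypothetical IDs).
    static Dictionary<string, List<string>> matrix = new Dictionary<string, List<string>>
    {
        { "script_login.txt",  new List<string> { "REQ-001", "REQ-002" } },
        { "script_report.txt", new List<string> { "REQ-003" } }
    };

    // A script is "within scope" if every requirement it maps to is approved.
    public static bool IsWithinScope(string script, HashSet<string> approvedReqs)
    {
        return matrix[script].All(r => approvedReqs.Contains(r));
    }
}
```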

Testing question (No.6)

Q: What is a test scenario?
A: The terms “test scenario” and “test case” are often used synonymously.
Test scenarios are test cases, or test scripts, and the sequence in which they are to be executed.
Test scenarios are test cases that ensure that business process flows are tested from end to end.
Test scenarios are independent tests, or a series of tests that follow each other, where each test depends on the output of the previous one.
Test scenarios are prepared by reviewing functional requirements, and preparing logical groups of functions that can be further broken into test procedures.
Test scenarios are designed to represent both typical and unusual situations that may occur in the application.
Test engineers define unit test requirements and unit test scenarios. Test engineers also execute unit test scenarios.
It is the test team that, with assistance of developers and clients, develops test scenarios for integration and system testing.
Test scenarios are executed through the use of test procedures or scripts.
Test procedures or scripts define a series of steps necessary to perform one or more test scenarios.
Test procedures or scripts may cover multiple test scenarios.

Q: What is verification?
A: Verification ensures the product is designed to deliver all functionality to the customer; it typically involves reviews and meetings to evaluate documents, plans, code, requirements and specifications; this can be done with checklists, issues lists, walk-throughs and inspection meetings.

Q: What is validation?
A: Validation ensures that functionality, as defined in requirements, is the intended behavior of the product; validation typically involves actual testing and takes place after verifications are completed.

Q: What is a walk-through?
A: A walk-through is an informal meeting for evaluation or informational purposes. A walk-through is also a process at an abstract level. It’s the process of inspecting software code by following paths through the code (as determined by input conditions and choices made along the way). The purpose of code walk-throughs is to ensure the code fits the purpose.
Walk-throughs also offer opportunities to assess an individual’s or team’s competency.

Q: What is good code?
A: Good code is code that works, is free of bugs, and is readable and maintainable. Organizations usually have coding standards all developers should adhere to, but every programmer and software engineer has different ideas about what is best, and about what constitutes too many or too few rules. We need to keep in mind that excessive use of rules can stifle both productivity and creativity. Peer reviews and code analysis tools can be used to check for problems and enforce standards.

Q: What is good design?
A: Design could mean many things, but it often refers to functional design or internal design. Good functional design is indicated by software whose functionality can be traced back to customer and end-user requirements. Good internal design is indicated by software code whose overall structure is clear, understandable, easily modifiable and maintainable; that is robust, with sufficient error handling and status logging capability; and that works correctly when implemented.

Q: What is the software life cycle?
A: The software life cycle begins when a software product is first conceived and ends when it is no longer in use. It includes phases such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, re-testing and phase-out.

Testing question (No.5)

Q: How do you check the security of your application?
A: To check the security of an application, we can use security/penetration testing. Security/penetration testing is testing how well the system is protected against unauthorized internal or external access, or willful damage.
This type of testing usually requires sophisticated testing techniques.

Q: When testing the password field, what is your focus?
A: When testing the password field, one needs to verify that passwords are encrypted.
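One practical check is that whatever the application stores is never the plaintext password. The sketch below uses .NET’s built-in SHA-256 purely as an illustration of that check; real password storage should use a salted, slow scheme such as PBKDF2 (e.g. `Rfc2898DeriveBytes`) rather than a single fast hash:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public class PasswordCheck
{
    // Hash a password with SHA-256 and return it as hex. Illustration only:
    // production code should use a salted, iterated scheme such as PBKDF2.
    public static string Hash(string password)
    {
        using (var sha = SHA256.Create())
        {
            byte[] digest = sha.ComputeHash(Encoding.UTF8.GetBytes(password));
            return BitConverter.ToString(digest).Replace("-", "");
        }
    }
}
```

A test would then assert that the stored value differs from the plaintext and that the same input always produces the same stored value.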

Q: What is the objective of regression testing?
A: The objective of regression testing is to test that the fixes have not created any other problems elsewhere. In other words, the objective is to ensure the software has remained intact.
A baseline set of data and scripts is maintained and executed to verify that changes introduced during the release have not "undone" any previous code.
Expected results from the baseline are compared to results of the software under test. All discrepancies are highlighted and accounted for, before testing proceeds to the next level.
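The baseline comparison described above can be sketched as a simple diff between two result sets. The key/value names below are hypothetical; in practice they might be report totals, row counts, or file checksums:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class RegressionCheck
{
    // Return the result keys whose values differ between the baseline
    // and the current run (including keys missing from the current run).
    public static List<string> Discrepancies(
        Dictionary<string, string> baseline,
        Dictionary<string, string> current)
    {
        return baseline.Keys
            .Where(k => !current.ContainsKey(k) || current[k] != baseline[k])
            .ToList();
    }
}
```

Every key this returns is a discrepancy that must be accounted for before testing proceeds.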

Q: What is the difference between software bug and software defect?
A: A ‘software bug’ is a nonspecific term that means an inexplicable defect, error, flaw, mistake, failure, fault, or unwanted behavior of a computer program.
Other terms, e.g. software defect and software failure, are more specific.
Many believe the word ‘bug’ is a reference to insects that caused malfunctions in early electromechanical computers, but the truth is that the word ‘bug’ has been part of engineering jargon for well over 100 years. Thomas Edison, the great inventor, wrote the following in 1878: "It has been just so in all of my inventions. The first step is an intuition, and comes with a burst, then difficulties arise—this thing gives out and [it is] then that "Bugs" — as such little faults and difficulties are called — show themselves and months of intense watching, study and labor are requisite before commercial success or failure is certainly reached."

Q: What is the definition of top down design?
A: Top down design progresses from simple design to detailed design. Top down design solves problems by breaking them down into smaller, easier to solve subproblems. Top down design creates solutions to these smaller problems, and then tests them using test drivers.
In other words, top down design starts the design process with the main module or system, then progresses down to lower level modules and subsystems.
To put it differently, top down design looks at the whole system, and then explodes it into subsystems, or smaller parts. A systems engineer or systems analyst determines what the top level objectives are, and how they can be met. He then divides the system into subsystems, i.e. breaks the whole system into logical, manageable-size modules, and deals with them individually.

Q: What is the difference between top down and bottom up design?
A: Top down design proceeds from the abstract (entity) to get to the concrete (design). Bottom up design proceeds from the concrete (design) to get to the abstract (entity).
Top down design is most often used in designing brand new systems, while bottom up design is sometimes used when one is reverse engineering a design; i.e. when one is trying to figure out what somebody else designed in an existing system.
Bottom up design begins the design with the lowest level modules or subsystems, and progresses upward to the main program, module, or subsystem.
With bottom up design, a structure chart is necessary to determine the order of execution, and the development of drivers is necessary to complete the bottom up approach.
Top down design, on the other hand, begins the design with the main or top-level module, and progresses downward to the lowest level modules or subsystems.
Real life sometimes is a combination of top down design and bottom up design.
For instance, data modeling sessions tend to be iterative, bouncing back and forth between top down and bottom up modes, as the need arises.

Q: What is the definition of bottom up design?
A: Bottom up design begins the design at the lowest level modules or subsystems, and progresses upward to the design of the main program, main module, or main subsystem.
To determine the order of execution, a structure chart is needed, and, to complete the bottom up design, the development of drivers is needed.
In software design – assuming that the data you start with is a pretty good model of what you’re trying to do – bottom up design generally starts with the known data (e.g. customer lists, order forms); the data is then broken into chunks (i.e. entities) appropriate for planning a relational database.
This process reveals what relationships the entities have, and what the entities’ attributes are.
In software design, bottom up design doesn’t only mean writing the program in a different order, but there is more to it. When you design bottom up, you often end up with a different program. Instead of a single, monolithic program, you get a larger language, with more abstract operators, and a smaller program written in it.
Once you abstract out the parts which are merely utilities, what is left is a much shorter program. The higher you build up the language, the less distance you will have to travel down to it from the top. Bottom up design makes it easy to reuse code blocks.
For example, many of the utilities you write for one program are also useful for programs you have to write later. Bottom up design also makes programs easier to read.

Q: What is the purpose of a test plan?
A: Reason number 1: We create a test plan because preparing it helps us to think through the efforts needed to validate the acceptability of a software product.
Reason number 2: We create a test plan because it can and will help people outside the test group to understand the why and how of product validation.
Reason number 3: We create a test plan because, in regulated environments, we have to have a written test plan.
Reason number 4: We create a test plan because the general testing process includes the creation of a test plan.
Reason number 5: We create a test plan because we want a document that describes the objectives, scope, approach and focus of the software testing effort.
Reason number 6: We create a test plan because it includes test cases, conditions, the test environment, a list of related tasks, pass/fail criteria, and risk assessment.
Reason number 7: We create test plan because one of the outputs for creating a test strategy is an approved and signed off test plan document.
Reason number 8: We create a test plan because the software testing methodology is a three-step process, and one of the steps is the creation of a test plan.
Reason number 9: We create a test plan because we want an opportunity to review the test plan with the project team.
Reason number 10: We create a test plan document because test plans should be documented, so that they are repeatable.

Q: What is the difference between a test plan and a test scenario?
A: Difference number 1: A test plan is a document that describes the scope, approach, resources, and schedule of intended testing activities, while a test scenario is a document that describes both typical and atypical situations that may occur in the use of an application.
Difference number 2: Test plans define the scope, approach, resources, and schedule of the intended testing activities, while test procedures define test conditions, data to be used for testing, and expected results, including database updates, file outputs, and report results.
Difference number 3: A test plan is a description of the scope, approach, resources, and schedule of intended testing activities, while a test scenario is a description of test cases that ensure that a business process flow, applicable to the customer, is tested from end to end.

Testing question (No.4)

Q: What is the difference between verification and validation?
A: Verification takes place before validation, and not vice versa. Verification evaluates documents, plans, code, requirements, and specifications. Validation, on the other hand, evaluates the product itself.
The inputs of verification are checklists, issues lists, walk-throughs and inspection meetings, reviews and meetings. The input of validation, on the other hand, is the actual testing of an actual product.
The output of verification is a nearly perfect set of documents, plans, specifications, and requirements document. The output of validation, on the other hand, is a nearly perfect, actual product.

Q: What is up time?
A: "Up time" is the time period when a system is operational and in service. Up time is the sum of busy time and idle time.
For example, if, out of 168 hours, a system has been busy for 50 hours, idle for 110 hours, and down for 8 hours, then the busy time is 50 hours, idle time is 110 hours, and up time is (110 + 50 =) 160 hours.
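The arithmetic in that example works from either direction, as a quick sketch shows:

```csharp
using System;

// Up time = busy time + idle time, which is also total time minus down time.
public class UpTime
{
    public static int FromBusyAndIdle(int busyHours, int idleHours)
    {
        return busyHours + idleHours;
    }

    public static int FromTotalAndDown(int totalHours, int downHours)
    {
        return totalHours - downHours;
    }
}
```

For the 168-hour week above: 50 + 110 = 160 hours, and 168 − 8 = 160 hours, so both routes agree.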

Q: What is user friendly software?
A: A computer program is user friendly when it is designed with ease of use as one of the primary objectives of its design.

Q: What is a virtual address?
A: In virtual storage systems, virtual addresses are assigned to auxiliary storage locations. They allow those locations to be accessed as though they were part of main storage.

Q: What is virtual memory?
A: Virtual memory relates to virtual storage. In virtual storage, portions of a user’s program and data are placed in auxiliary storage, and the operating system automatically swaps them in and out of main storage as needed.

Q: What is virtual storage?
A: Virtual storage is a storage allocation technique, in which auxiliary storage can be addressed as though it was part of main storage. Portions of a user’s program and data are placed in auxiliary storage, and the operating system automatically swaps them in and out of main storage as needed.

Q: What is disaster recovery testing?
A: Disaster recovery testing is testing how well the system recovers from disasters, crashes, hardware failures, or other catastrophic problems.