Materialized view or Indexed view

A materialized view (or indexed view, in SQL Server terminology) is a database object that stores the result of a query physically. It may be a local copy of data located remotely, a subset of the rows and/or columns of a table or join result, or a summary based on aggregations of a table's data. Materialized views that store data based on remote tables are also known as snapshots, and a snapshot can be redefined as a materialized view.
In a DBMS, a view is a virtual table representing the result of a database query. Whenever a query or an update addresses an ordinary view's virtual table, the DBMS converts it into a query or update against the underlying base tables. A materialized view takes a different approach: the query result is cached as a concrete table that may be refreshed from the original base tables from time to time. This enables much more efficient access, at the cost of some data being potentially out of date. It is most useful in data warehousing scenarios, where frequent queries against the actual base tables can be extremely expensive.

Because a materialized view behaves like a real table, anything that can be done to a database table can be done to it. In particular, indexes can be created on any of its columns, improving query performance further.
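As a sketch of how an indexed view is created in SQL Server (the table and column names here are hypothetical), note that it is the unique clustered index that actually materializes the view's result on disk:

```sql
-- Hypothetical Sales table; SCHEMABINDING is required for an indexed view.
CREATE VIEW dbo.SalesByProduct
WITH SCHEMABINDING
AS
SELECT ProductID,
       COUNT_BIG(*)  AS OrderCount,    -- COUNT_BIG is required with GROUP BY
       SUM(Quantity) AS TotalQuantity
FROM dbo.Sales
GROUP BY ProductID;
GO

-- Creating this unique clustered index stores the view's rows physically.
CREATE UNIQUE CLUSTERED INDEX IX_SalesByProduct
    ON dbo.SalesByProduct (ProductID);
```

Further nonclustered indexes can then be added on the view just as on an ordinary table.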
Posted in Database Concepts | Leave a comment

Database Views

A view consists of a query accessible as a virtual table in a relational database. Unlike normal tables (base tables), a view does not form part of the physical schema. Changing the data in a base table alters the data shown in subsequent results of the view.
Advantages of views over tables:
  • Views can represent a subset of the data contained in a table
  • Views can join and simplify multiple tables into a single virtual table
  • Views can act as aggregated tables, where the database engine aggregates data (sum, average etc.) and presents the calculated results as part of the data
  • Views take very little space to store because the database contains only the definition of a view, not a copy of all the data it presents
  • Views can limit the degree of exposure of a table or tables to the outer world
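Several of these points can be shown in one minimal sketch (all table and column names hypothetical): a view that joins two tables, exposes only some columns, and restricts the rows shown.

```sql
-- Joins Employee and Department, hides sensitive columns,
-- and shows only active staff.
CREATE VIEW dbo.EmployeeDirectory AS
SELECT e.EmployeeID,
       e.FirstName,
       e.LastName,
       d.DepartmentName
FROM dbo.Employee   e
JOIN dbo.Department d ON d.DepartmentID = e.DepartmentID
WHERE e.IsActive = 1;
```

Users can then be granted SELECT on the view without any rights on the underlying tables.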
Just as functions (in programming) can provide abstraction, so database users can create abstraction by using views. In another parallel with functions, database users can manipulate nested views, thus one view can aggregate data from other views.
Just as rows in a base table lack any defined ordering, rows available through a view do not appear with any default sorting. A view is a relational table, and the relational model defines a table as a set of rows. Since sets are not ordered – by definition – the rows in a view are not ordered, either. Therefore, an ORDER BY clause in the view definition is meaningless. However, sorted data can be obtained from a view, in the same way as any other table – as part of a query statement.
Posted in Database Concepts | Leave a comment

Database Lock

A lock is used when multiple users need to access a database concurrently. It prevents data from being corrupted or invalidated when several users try to update the database at the same time. A user can only modify those database records (that is, items in the database) to which they have applied a lock giving them exclusive access until the lock is released. There are two techniques for locking data in a database:

1.      Pessimistic locking
2.      Optimistic locking  

With pessimistic locking, a record or page is locked as soon as the lock is requested, while with optimistic locking the record or page is only locked when the changes made to it are written back. Optimistic locking is only appropriate when there is little chance of someone else needing the record in the meantime; the update is not guaranteed to succeed, because the attempt fails if another user updates the record first. With pessimistic locking, the update is guaranteed to succeed. Locking behavior can be controlled through the transaction isolation level.
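A common way to implement optimistic locking is a version column that every update must match. A sketch with hypothetical names (SQL Server's rowversion type can play the same role):

```sql
-- The update succeeds only if nobody changed the row since we read it.
UPDATE dbo.Account
SET    Balance = @NewBalance,
       Version = Version + 1
WHERE  AccountID = @AccountID
  AND  Version   = @ExpectedVersion;   -- value read earlier by this user

IF @@ROWCOUNT = 0
    RAISERROR('Row was changed by another user; reload and retry.', 16, 1);
```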

Posted in Database Concepts | Leave a comment

Normalization Denormalization

Database normalization is the process of organizing the fields and tables of a relational database to minimize redundancy and dependency. Normalization usually involves dividing large tables into smaller tables and defining relationships between them. The main goal is to isolate data so that additions, deletions, and modifications of a field can be made in just one table and then propagated through the rest of the database via the defined relationships.
Informally, a relational database table is often described as “normalized” if it is in the Third Normal Form. Most 3NF tables are free of insertion, update, and deletion anomalies.
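As a small illustration (the tables are hypothetical), normalization replaces a table that repeats customer details on every order row with two related tables:

```sql
-- Before: Orders(OrderID, OrderDate, CustomerName, CustomerCity)
-- repeats the customer's name and city on every order row.

-- After: customer data lives in one place and is referenced by key.
CREATE TABLE dbo.Customer (
    CustomerID   INT          PRIMARY KEY,
    CustomerName VARCHAR(100) NOT NULL,
    CustomerCity VARCHAR(50)
);

CREATE TABLE dbo.Orders (
    OrderID    INT  PRIMARY KEY,
    OrderDate  DATE NOT NULL,
    CustomerID INT  NOT NULL REFERENCES dbo.Customer (CustomerID)
);
```

A change to a customer's city now touches exactly one row, avoiding update anomalies.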
Databases intended for online transaction processing (OLTP) are typically more normalized than databases intended for online analytical processing (OLAP).
OLTP applications are characterized by a high volume of small transactions such as updating a sales record at a supermarket checkout counter. The expectation is that each transaction will leave the database in a consistent state. By contrast, databases intended for OLAP operations are primarily “read only” databases. OLAP applications tend to extract historical data that has accumulated over a long period of time.
For such databases, redundant or “denormalized” data may facilitate business intelligence applications. Specifically, dimensional tables in a star schema often contain denormalized data. The denormalized or redundant data must be carefully controlled during extract, transform, load (ETL) processing, and users should not be permitted to see the data until it is in a consistent state. The normalized alternative to the star schema is the snowflake schema. In many cases, the need for denormalization has waned as computers and RDBMS software have become more powerful, but since data volumes have generally increased along with hardware and software performance, OLAP databases often still use denormalized schemas.
Denormalization is also used to improve performance on smaller computers, such as computerized cash registers and mobile devices, since these may use the data for look-up only (e.g. price lookups). Denormalization may also be used when no RDBMS exists for a platform (such as Palm), or when no changes are to be made to the data and a swift response is crucial.
Posted in Database Concepts | Leave a comment

Data Modeling

Data modeling is a process used to define the data requirements needed to support the business within the scope of an organization's corresponding information systems.
There are three different types of data models produced while progressing from requirements to the actual database to be used for the information system. The data requirements are initially recorded as a conceptual data model, which is essentially a set of technology-independent specifications about the data and is used to discuss initial requirements with the business stakeholders.
The conceptual model is then translated into a logical data model, which documents structures of the data that can be implemented in databases. Implementation of one conceptual data model may require multiple logical data models. The last step in data modeling is transforming the logical data model to a physical data model that organizes the data into tables, and accounts for access, performance and storage details. Data modeling defines not just data elements, but also their structures and the relationships between them.
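To make the last step concrete, a physical data model sketch (all names hypothetical) pins down the storage and access details that the logical model leaves open, such as exact data types and indexes:

```sql
-- Physical model: exact types, nullability, and an index
-- chosen to match expected access patterns.
CREATE TABLE dbo.Product (
    ProductID   INT           NOT NULL PRIMARY KEY,
    ProductName NVARCHAR(100) NOT NULL,
    CategoryID  INT           NOT NULL,
    ListPrice   DECIMAL(10,2) NOT NULL
);

-- A nonclustered index supporting frequent lookups by category.
CREATE NONCLUSTERED INDEX IX_Product_CategoryID
    ON dbo.Product (CategoryID);
```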
Data modeling techniques and methodologies are used to model data in a standard, consistent, predictable manner so that it can be managed as a resource. The use of data modeling standards is strongly recommended for any project that requires a standard means of defining and analyzing data within an organization.
Posted in Database Concepts | Leave a comment

Primary Foreign and Unique Keys

In a relational database, a primary key is a key that uniquely identifies each row. The primary key must consist of attributes whose values cannot be duplicated by any other row; it may be a single attribute or multiple attributes in combination. For example, a birthday could be shared by many people and so would not be a prime candidate for a primary key, but a social security number or driver's license number would be ideal since each correlates to a single person. A primary key may also serve as a foreign key on a related table. For example:

Author table schema:

AuthorTable(AUTHOR_ID, AuthorName, CountryBorn, YearBorn)

Book table schema:

BookTable(ISBN, AUTHOR_ID, Title, Publisher, Price)
Here we can see that AUTHOR_ID serves as the primary key in AuthorTable and as a foreign key in BookTable. The foreign key is the link, and therefore the connection, between the two related tables in this sample database.
In a relational database, a unique key can uniquely identify each row of data values in a database table. A unique key comprises a single column or a set of columns in a single database table. No two distinct rows in a table can have the same value (or combination of values) in the unique key columns, provided NULL values are not used. Depending on its design, a table may have many unique keys but at most one primary key.
A unique key constraint does not imply the NOT NULL constraint in practice. Because NULL is not an actual value (it represents the lack of a value), when two rows are compared, and both rows have NULL in a column, the column values are not considered to be equal. Thus, in order for a unique key to uniquely identify each row in a table, NULL values must not be used. According to the SQL standard and Relational Model theory, a unique key (unique constraint) should accept NULL in several rows/tuples — however not all RDBMS implement this feature correctly.
A unique key should uniquely identify all possible rows that exist in a table and not only the currently existing rows. Examples of unique keys are Social Security numbers (associated with a specific person) or ISBNs (associated with a specific book). Telephone books and dictionaries cannot use names, words, or Dewey Decimal system numbers as candidate keys because they do not uniquely identify telephone numbers or words.
A table can have at most one primary key but more than one unique key. A primary key is a combination of columns that uniquely specifies a row; it is a special case of a unique key. One difference is that primary keys carry an implicit NOT NULL constraint while unique keys do not, so the values in unique key columns may or may not be NULL (in SQL Server, such a column may contain at most one NULL). Another difference is that primary keys are defined using a different syntax.
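The author/book example above can be written as DDL showing all three kinds of key together (a sketch; the column sizes are assumptions):

```sql
CREATE TABLE dbo.AuthorTable (
    AUTHOR_ID   INT          PRIMARY KEY,   -- primary key: implicit NOT NULL
    AuthorName  VARCHAR(100) NOT NULL,
    CountryBorn VARCHAR(50),
    YearBorn    INT
);

CREATE TABLE dbo.BookTable (
    ISBN      VARCHAR(13) PRIMARY KEY,      -- natural primary key
    AUTHOR_ID INT NOT NULL
        REFERENCES dbo.AuthorTable (AUTHOR_ID),  -- foreign key to AuthorTable
    Title     VARCHAR(200),
    Publisher VARCHAR(100),
    Price     DECIMAL(8,2),
    -- Unique key: unlike the primary key, NULL is permitted
    -- (in SQL Server, at most one NULL).
    CONSTRAINT UQ_BookTable_Title UNIQUE (Title)
);
```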

Posted in Database Concepts | Leave a comment

Deployment issue’s in SSAS cube

After installing and configuring SQL Server, trying to deploy the first sample cube may throw an error regarding access to the data source.
The error message says that the connection to the data source is not valid, or that the data source cannot connect to the database. These errors occur because the account under which the data source tries to access the database does not have the appropriate permissions.
Solution:
To resolve the issue, connect to the database engine, expand the Logins node, check the access granted to the account in question, and grant it the sysadmin server role (or at least read access to the source database). If the account does not exist under Logins, create a new login and assign it the appropriate rights.
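The same fix can be scripted in T-SQL roughly like this (the login and database names are placeholders for your environment):

```sql
-- Create a login for the service/user account if it does not exist yet.
CREATE LOGIN [DOMAIN\SSASServiceAccount] FROM WINDOWS;

-- Broad fix, as described above: add the login to the sysadmin server role.
ALTER SERVER ROLE sysadmin ADD MEMBER [DOMAIN\SSASServiceAccount];

-- Narrower alternative: grant read access to just the source database.
USE SourceDatabase;
CREATE USER [DOMAIN\SSASServiceAccount] FOR LOGIN [DOMAIN\SSASServiceAccount];
ALTER ROLE db_datareader ADD MEMBER [DOMAIN\SSASServiceAccount];
```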
Posted in SQL Server Analysis Services | Leave a comment

Difference between Varchar and Nvarchar

As we all know, the basic difference between varchar and nvarchar is that one holds non-Unicode data and the other holds Unicode data. But many of us know this only in theory and have not seen how it actually works.
Here is an example of how the varchar and nvarchar data types behave in Query Analyzer:
DECLARE @sampleVar  varchar(10)
DECLARE @sampleNVar nvarchar(10)

-- Without the N prefix the literal is non-Unicode, so Ж is typically
-- lost (shown as ?) in both variables, collation permitting.
SET @sampleVar  = 'VarSamp Ж'
SET @sampleNVar = 'NVarSamp Ж'
SELECT @sampleNVar, @sampleVar

-- With the N prefix the literal is Unicode: only the nvarchar
-- variable preserves Ж; the varchar variable still loses it.
SET @sampleNVar = N'NVarSamp Ж'
SET @sampleVar  = N'VarSamp Ж'
SELECT @sampleNVar, @sampleVar
Notice that when we add the N prefix to the literal, the nvarchar variable displays the Unicode character; otherwise it shows the same lossy result as the varchar variable. From this example we can conclude that Unicode data must be handled through nvarchar, so when developing a bilingual application, or when there is any possibility of the application being used in different languages, we should use the nvarchar data type.
Keywords: How to read nvarchar data in SQL Query Analyzer, SQL Server data types, SQL Server questions, varchar, nvarchar.
Posted in Uncategorized | Leave a comment

Control Flow Task in SSIS

The control flow is the starting point of package design. It is the first window a developer opens when designing a package. A package's control flow contains the package workflow, where the entire flow across multiple tasks is designed.


The Control Flow tab in the Package Designer window of BIDS is used to design the workflow of the tasks that constitute the package. Containers such as the For Loop, Sequence, and Foreach Loop are used to structure the process. The Data Flow Task is a subtask of the control flow in which data is transferred between data sources. Tasks are added to the control flow via the Toolbox, and double-clicking a task in the Control Flow window opens its Task Editor, which is used to configure the task.

The control flow is a compulsory part of package design.


Posted in Uncategorized | Leave a comment

Deploying SSIS Package

To deploy an SSIS package, a deployment utility needs to be created. To create the deployment utility:
1.      Go to Solution Explorer.
2.      Right-click the SSIS project.
3.      Open the Properties window.
4.      Set CreateDeploymentUtility to True.
5.      Right-click the project and build it.
 
This process creates a manifest file in the bin folder of the SSIS project, at the location where the project is saved.
The manifest launches a wizard-based deployment utility, which offers both SQL Server deployment and file system deployment.
 
SQL Server deployment: with this mechanism the package is deployed to the Integration Services server in SQL Server.
The package is stored in the MSDB database on the Integration Services server; MSDB holds all the packages deployed to that particular instance of SQL Server. You can run a package directly by right-clicking it under MSDB, or schedule it as a SQL Server Agent job.
 
File system deployment: the package is deployed to a particular location on the hard drive, from which it can later be run by double-clicking it or via a SQL Server Agent job.


[Embedded video: the process of deploying an SSIS package.]
Posted in SSIS Study Guide | Leave a comment