Friday, December 22, 2006

Database Row Level Security - Part 2 - Oracle

In the first episode we briefly introduced Row Level Security. In this part I’m going to show you how to implement it in an Oracle 10g database.

Oracle 8i introduced a feature called VPD (Virtual Private Database), also known as Fine-Grained Access Control, which provides powerful row-level security capabilities.
VPD works by modifying SQL commands to present a partial view of the data to users, based on a set of predefined criteria. At runtime, WHERE clauses are appended to queries so that they return only those rows the user is supposed to see.

Important: implementing row level security using VPD requires each user to have an actual corresponding user object in the database, not just a record in a table. If that's not feasible, then part 3 might be the way to go.

Here is the list of what we need to implement this:
  1. An Application Context.
  2. A procedure that sets a variable in the above-mentioned context and is called when a user logs in to the database.
  3. A secured function that builds the WHERE clause using the variable that has been set in the context.
  4. An RLS POLICY that puts all of these together and tells the database how to filter queries.
I'll explain it using the doctors-and-patients example from part 1.
Let’s assume the tables look something like the following picture (the relationship could actually be many-to-many, but I simplified it).
Every time a doctor logs in to the system, a procedure is invoked which sets the value of a variable in the Application Context, called, say, "logged_in_doc_id", to the doctor's doc_id field as queried from the Doctors table.

When a doctor queries the list of patients, another procedure filters the data. In this case, we simply add a WHERE clause to the query, something like: WHERE f_doc_id = logged_in_doc_id. This WHERE clause is appended to the SELECT command, making it look like this (assuming his id in the database is 1): SELECT * FROM PATIENTS WHERE f_doc_id = 1.

Here is the code:

1 & 2 –
As can be seen, I’ve named the Application Context "NMSPC" and I’ve set the "logged_in_doc_id" variable to the value of the "doc_id" field of the record whose "doc_name" field matches the currently logged-in user’s name (USER).


      CREATE CONTEXT nmspc USING set_doc_context;  -- procedure name is illustrative

      CREATE OR REPLACE PROCEDURE set_doc_context IS
         t_id NUMBER(5);
      BEGIN
         SELECT doc_id INTO t_id FROM doctors
          WHERE UPPER(doc_name) = UPPER(USER);
         DBMS_SESSION.SET_CONTEXT
            ('NMSPC', 'logged_in_doc_id', t_id);
      END;


The EXEC_CONTEXT_SETUP logon trigger is called whenever a user (in this instance "Jim") logs in.
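A sketch of such a logon trigger (assuming the context-setting procedure is named set_doc_context; that name is illustrative):

```sql
-- Sketch only: fires on every logon and populates the NMSPC context.
-- set_doc_context stands for the procedure that calls
-- DBMS_SESSION.SET_CONTEXT('NMSPC', 'logged_in_doc_id', ...).
CREATE OR REPLACE TRIGGER exec_context_setup
AFTER LOGON ON DATABASE
BEGIN
   set_doc_context;
END;
/
```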

3 – We need a function that creates the predicate, i.e. the WHERE clause:

      CREATE OR REPLACE FUNCTION make_predicate   -- function name is illustrative
           (obj_schema VARCHAR2,
            obj_name   VARCHAR2)
         RETURN VARCHAR2 IS
      BEGIN
         IF UPPER(obj_name) = 'PATIENTS' THEN
            RETURN 'f_doc_id = SYS_CONTEXT(''NMSPC'', ''logged_in_doc_id'')';
         END IF;
         RETURN NULL;
      END;
SYS_CONTEXT is a function that retrieves the value of a variable in a particular context.

4 – The last thing we are going to create is an RLS POLICY that puts all of this together and makes it meaningful.

      BEGIN
         DBMS_RLS.ADD_POLICY
            ('ALL_TABLES', 'Patients', 'patients_policy',   -- policy name is illustrative
             'ALL_TABLES', 'make_predicate');               -- the predicate function from step 3
      END;
Please note that "ALL_TABLES" is the schema under which all the tables are created and to which all users have access.
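To sanity-check the setup, log in as a doctor and query the table. A sketch of what happens (the user name and id are taken from the example above):

```sql
-- Connected as doctor JIM, whose doc_id is 1:
SELECT * FROM patients;
-- With the policy in place, Oracle transparently rewrites this as:
--   SELECT * FROM patients
--    WHERE f_doc_id = SYS_CONTEXT('NMSPC', 'logged_in_doc_id');
-- so Jim sees only the rows where f_doc_id = 1.
```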

Sunday, December 03, 2006

Database Row Level Security - Part 1

In this three-part series I intend to explain what Row Level Security in database management is and how to implement it in Oracle and SQL Server.

In almost all database management systems, access to information is controlled only down to a certain level of granularity. In some scenarios, however, there is a requirement to control access at a more granular level. A list of patients, for example, may be stored in a single table, yet each doctor is only permitted to view his own patients' records. Another example is applications that implement feature-level access control, where users' credentials and their access to the application's features and modules are stored in the database and used by the application.
In such cases, merely issuing a GRANT/DENY SELECT command on the table will not meet the requirement. One solution is to implement the filtering at the application level - in the Business Logic Layer, if the application uses a 3-tier architecture. But what if someone bypasses the code and gains direct access to the data? The presentation layer merely creates the illusion of security. What actually needs to happen is for queries to be filtered automatically at the database level. In the doctors-and-patients example, all users might have access to the patient table, but SELECT * FROM Patient only returns the rows that a user should see, wherever that query is executed from.

In part 2, I’ll explain how RLS can be implemented in Oracle 10g using its fine-grained access control (Virtual Private Database) features.

Friday, November 24, 2006

Microsoft and Linux

If you are an Oracle expert or follower, you’ve probably been following Oracle’s OpenWorld conference and its news. One of the biggest items was Oracle’s announcement on October 26 that it would distribute and support Red Hat Linux. This prompted Linux Magazine to say, “Oracle was predicted to make a big Linux announcement this week and you would be hard pressed to make a bigger one than this”. But I believe the second announcement, from Microsoft and Novell, that they would be working together on Linux, was no less important. Yes, you read it right. Microsoft made a deal to be a distributor of SUSE Linux, Novell's version of the operating system. Beyond the patent covenant and financial arrangements between them, it was revealed that the two companies will work together on at least three projects.
The companies will jointly develop a compelling virtualization offering for Linux and Windows, which would enable Windows to run under Linux and Linux to run under Windows.
The two companies will be working together on Web Services.
And finally, there will be an effort to create interoperability between Microsoft Office and OpenDocument file formats. Well, things are changing in the world of software development. I think we have to get ready for the time when .NET applications are fully portable to Linux.

Update (2013):

Mono has kept its promise of delivering a cross-platform .NET development framework to this day. For example, we ported and recompiled an ASP.NET web application that was built with .NET Framework 3.5 (utilizing many of its new features) and deployed it onto a server running CentOS Linux.

Monday, October 23, 2006

Iterative Development

I'd like to write about something a little bit different from the other posts, Iterative Development.
Well, it may sound easy and obvious. That’s what I thought when I heard about it for the first time. But when I ask the simple question of what Iterative Development is, I usually get a vague answer. Thus, I decided to write about it.
I used to think the most important part of development was to gather the requirements and understand them thoroughly before starting development. Understanding requirements is indeed very important. But that doesn't mean collecting and understanding all the detailed requirements at the beginning, developing all the way to the end, and maybe showing a prototype to the end user in one stage. Does this sound familiar? Well, that’s not what proper Iterative Development is.
Yes, you have to have a good enough overall knowledge of the stakeholders’ requirements from day one - a vision. But detailing the requirements and developing them in stages - called Iterations - and informing stakeholders and getting their confirmation of what you have done at the end of each stage are what Iterative Development is composed of.
Maybe the following picture conveys what that means:

As you can see, this picture shows that at each stage some of the requirements are detailed, developed, and built. What it doesn’t show is the customer involvement in this process, which can be seen in the next picture.

At the end of each iteration you will have a tested build that is presentable to the customer, so you will be able to show him your progress, get his feedback, and assess it. Of course, after that there is a chance that your plans and documents will need to be refined. That shows you are progressing; that's the nature of this way of development.
Regarding management, I must say it’s not an easy responsibility to manage an Iterative Development process. The project manager needs to plan each iteration before it starts and assess the result when it finishes.
I think it's time to show you some of the advantages. To me the most important one is mitigating risks. Let’s take a closer look at one of the risks that every project may face: change requests. Which is easier? Receiving a change request (one that your customer thinks is quite small and easy, but that can in fact be devastating) at the end of the development process, or at the end of an iteration to which you have invited him or her to see the iteration’s output?
Another benefit of the iterative approach is having tests and test plans early in the project. It doesn’t seem right to postpone testing to the end of the project. You don’t want to see bugs and issues - especially big ones - at the end of development, when you believe you are done. Do you?
I hope you got the picture.
One more thing: the Iterative Development approach can be troublesome if you don’t have experience or a consultant. So please don’t take on a real project - even a small one - and try to use this approach without someone who has been through it before. You might end up where I did once before.
Let me know if you disagree with me or you have a real case that you want to share with the others.

Sunday, October 15, 2006

How to pin objects in the Oracle's Shared Pool

In this post I'd like to write about Oracle's Shared Pool, the way it is used, and a package named "DBMS_SHARED_POOL".

Every Oracle instance has one big memory structure called the SGA (System Global Area). It can vary in size depending on the server's memory capabilities. The SGA is divided into various pools, and one of its critical pieces is the Shared Pool - critical in terms of performance. A wrongly sized Shared Pool can easily make the system look dead. There are many pieces of program data that Oracle keeps in shared memory.
For example, as you might know, executing a query is not a simple process: the database engine has to parse the query, find the most efficient way to execute it, do the I/O, and retrieve the actual data.
So, to avoid parsing every single query, Oracle keeps parsed queries in the Shared Pool. The next time it needs to parse a query, it first looks in the Shared Pool to find out whether that query has already been parsed. If it has, Oracle reads the data from the Shared Pool and continues. Let's say we have 100 users all executing the same piece of program: the first execution caches the parsed query and the others just use it. So the Shared Pool not only caches data but also shares it.
As you can imagine, the Shared Pool cannot keep programs' data forever. Sometimes it needs to take some of it out to make space for new data. Under those circumstances Oracle uses an LRU algorithm to decide what to erase. LRU stands for Least Recently Used: when Oracle needs to make space, it takes out the pieces that have been used least recently. That's what is called aging out - that piece of memory has been aged out. But sometimes you need to keep a piece of memory in the Shared Pool and make sure that it won't be aged out.
When? Well, if you have a large procedure that will be used frequently, you might want to load it at database startup time and keep it in the Shared Pool.
The DBMS_SHARED_POOL package has a procedure called KEEP. Using KEEP, you can pin an object in the Shared Pool - but not all kinds of objects, only packages, functions, procedures, sequences, triggers, and types. If you want to undo the KEEP, you need to use the UNKEEP procedure from the same package.
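A minimal sketch of pinning and unpinning (the schema and package names are illustrative):

```sql
-- Pin package MYSCHEMA.BIG_PKG in the Shared Pool. The second
-- argument is the object type flag: 'P' covers packages,
-- procedures, and functions.
EXEC DBMS_SHARED_POOL.KEEP('MYSCHEMA.BIG_PKG', 'P');

-- Undo it later, allowing the object to be aged out again.
EXEC DBMS_SHARED_POOL.UNKEEP('MYSCHEMA.BIG_PKG', 'P');
```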
More information about the package can be found in the Oracle documentation. By the way, if you don't have this package in your database, you can create it by running DBMSPOOL.SQL, which can be found at $ORACLE_HOME/rdbms/admin.

Tuesday, October 10, 2006

Team Development using MS TFS - Part 3

In part 1 and part 2 of this series we introduced TFS and its architecture.

I had intended to close this series by writing about other aspects of team development using TFS. But a colleague pointed out that TFS deployment is not as easy as it looks, so I'm dedicating this post to the topic of installation.
You can deploy TFS in two ways: single server or double server. Reasons to choose the double-server approach include performance and distribution: in that implementation, the application and database tiers are physically separated and each has more resources to utilize.

In this post I only discuss single server deployment.

Installation prerequisites 

1- User accounts

If the server in which you are installing TFS is part of a domain, you need three Active Directory domain user accounts.
If the server is a member of a workgroup, you can use local user accounts.

Sample user logon name - Purpose

TFSSETUP (setup account)
  • Used to run Team Foundation Server Setup.
  • This account must be an administrator on Team Foundation Server computers.
  • This account must be a member of the same domain as the following two service accounts. For example, you cannot have the two service accounts on a domain and then use a local account to run setup.

TFSSERVICE (service account)
  • Used as the service account by Team Foundation Server Windows services (Code Coverage Analysis Service and TFSServerScheduler), and the SharePoint Timer Service.
  • Used as the application pool identity by the Team Foundation Server application pool (TFS AppPool), and the Windows SharePoint Services application pools (TFWSS and TFSWSSADMIN).
  • Must have the Allow log on locally permission on Team Foundation Server computers.
  • For optimal security, this service account:
    - Should not be an administrator on Team Foundation Server computers.
    - Should have the option Account is sensitive and cannot be delegated selected for Active Directory on the domain.

TFSREPORT (reporting account)
  • Used as the service account by SQL Server Reporting Services data sources.
  • This account should not be an administrator on Team Foundation Server computers.
  • This account must have the Allow log on locally permission on Team Foundation Server computers.
2- Log in to the server using the Setup Account that you just created, TFSSETUP for example.
3- Install IIS 6.0 with ASP.NET enabled and without FrontPage Server Extensions.
4- Install SQL Server Enterprise or Standard Edition. Please take note of the following when installing SQL Server:
  • Install SQL Server as the default instance. Don’t name the instance.
  • Use the local system account for the service account. This should not be confused with the two service accounts you created prior to installation.
  • Install the following SQL Server services:
              - Database Services
              - Analysis Services
              - Reporting Services
              - Integration Services
  • Replication does not need to be installed.
  • On the Report Server Installation Options page in the setup wizard, select "Install the default configuration".
5- Install SQL Server Hotfixes.
6- Install .NET Framework 2.0 Hotfix KB913393. It’s available on the TFS installation CD.

7- Install Windows SharePoint Services Service Pack 2, with the "Web Farm" option. The service pack is available for download from Microsoft.
8- Finally, install Team Foundation Server. During installation, the setup wizard asks for the Service Logon Account and the Reporting Logon Account, for which you should use the TFS Service Account (TFSSERVICE) and the TFS Report Account (TFSREPORT) respectively.


To make sure the installation was successful: after restarting the server, open a browser, go to http://localhost:8080/services/v1.0/Registration.asmx, and click GetRegistrationEntries. On the GetRegistrationEntries page, click Invoke without entering a ToolID. GetRegistrationEntries is a web method of a web service running on the application tier. The SOAP response from the web service should look like this:

<?xml version="1.0" encoding="utf-8" ?>

Installing a double server is not any different, except that components are installed in different locations. Please feel free to share any problems you faced during TFS installation or administration.

Wednesday, October 04, 2006

Team Development using MS TFS - Part 2

In the first installment of this series, I introduced TFS.

This time, we are going to take a look at its architecture.

TFS is based on a multi-tiered architecture, consisting of a client tier, an application tier, and a data tier.
The data tier is a SQL Server instance that stores work items, version-controlled source files, test results, and other project metrics. (In TFS, entities like requirements, test cases, and defects are categorized under one group called work items. Each sub-category has its own properties, and you can extend the list of sub-categories.)
The application tier is composed of web-based applications that are integrated with IIS. These applications include SQL Server Reporting Services, TFS Core Services, and Windows SharePoint Services. The application tier hosts the Team Foundation Windows services as well. TFS uses Windows SharePoint Services to provide a project portal for the team. SQL Server Reporting Services offers reports and queries over the project’s work items. The TFS application tier also includes Team Foundation Build and TFS Server Proxy. The build server provides build automation. The server proxy is built for distributed development: it caches previously accessed source control files to improve the performance of source control operations over low-bandwidth connections. The following picture shows a use case of the TFS server proxy.

The client tier consists of a tool called Team Explorer. Referring to the picture in part 1, Team Explorer can be used as a standalone application or integrated with Visual Studio (except Express Edition). It also integrates with Microsoft Project and Microsoft Excel. This feature helps project managers to view project information and submit information to TFS using tools they are most comfortable with.
As for deployment, suffice it to say that there are two types (excluding Build and Server Proxy): single-server deployment and dual-server deployment (one server for the data tier and one for the application tier). Build and proxy servers can be installed on separate machines or on the same server as the application tier. A complete installation guide can be found here.

Friday, September 29, 2006

Team Development using MS TFS - Part 1

This is the first part of a 3-part series about MS TFS (Microsoft® Team Foundation Server). Part 2 and Part 3 will follow.
To me, the biggest challenges of enterprise-wide projects are managing the artifacts and deliverables in one single place, directing all development efforts toward one target, the lack of predictability, and, last but not least, distributed development.

I’m mostly talking about development on the Microsoft platform. Those who have worked with Rational Suite know how integrated and distributed it is. It allows you to manage all the requirements and their traceability, defects, and all the configurations in one single place. I’ve worked with Rational ClearCase and I must admit that, although difficult, it's truly distributed.

That’s why I call Team System the first great step from Microsoft toward real team development.

Now with that, a project manager can monitor everything in one single place and assign tasks and control the progress.
Architects, Testers, and DBAs can now be part of the game.
The thing I like most in this product is its extensibility and customizability, which I’m going to talk about in a future post of this series.
Real team development using Microsoft .NET needs three things in terms of tools: Visual Studio Team System, Team Foundation Server, and a methodology guideline.

Visual Studio Team System is a series of role-based editions of Visual Studio. It offers different versions for three different roles: Architects, Developers, and Testers. There is also a version called Team Suite that contains the functionality of all three, suitable for individuals who play all of those roles in a project. Microsoft has recently released a new version named Team Edition for Database Professionals, which is not included in the above-mentioned editions - not even Team Suite.

Team Foundation Server is the backend engine for Team System that enables the team to collaborate. Without TFS there is no team development. It’s this engine that keeps all the team members connected.
You are probably wondering what the Team Explorer in the picture is.
Well, it’s a component of TFS through which you can explore a team project inside TFS. It can be used as a standalone client or integrated with Visual Studio. You could say it’s the client tier of TFS.

About the third piece, a methodology guideline, TFS carries MSF (Microsoft’s process framework) in two forms, Agile MSF and CMMI MSF.
If you need to embed other process guidelines in TFS, RUP for example, you need to use third party tools.
In future parts, I intend to write more about TFS's architecture, how it works, installation and configuration, how to embed RUP’s guideline, and how to manage it.

Friday, September 22, 2006

SQL Server guys' problem of learning Oracle - Part 2

In the first part, I wrote briefly that there are fundamental differences between Oracle and SQL Server; in this episode we'll look at a few examples. The intention is to make it easier for those who know one product and would like to learn the other. I also intend to argue that designing applications to be database-agnostic is not a good practice.

In my opinion, there are two reasons someone (a SQL Server DBA or programmer) faces difficulties when transitioning to Oracle, and vice versa.

1- Similar terminology but different implementations.
2- Utilizing the database in applications the same way.


The most similar and confusing term between these two DBMSs, I believe, is “Instance”.
What is called an instance in SQL Server is a collection of Windows services that manages SQL Server databases. You can have more than one database attached to an instance at any point in time.
In Oracle, however, an instance is a little different, and so is the way Oracle DBAs manage it.
Unlike SQL Server, each instance in Oracle can mount and open only one database at any point in time. If you want more than one database open and running simultaneously, you’ve got to have another instance. Creating an instance on Linux came easier to me than creating one on Windows. On Windows, every instance is represented by a service, so to have a new instance you have to create a new service using a command-line tool. On Linux, you just need to take care of a few files and folders and then start it up.
Each Oracle instance also needs a "parameter" file to start, called a Server Parameter File (SPFile). This file defines the attributes and characteristics of the starting instance.
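For illustration, an SPFile can be created from a traditional text parameter file (PFile) with a single command; the paths below are illustrative:

```sql
-- Build a binary server parameter file from an init.ora-style text file
CREATE SPFILE = '/u01/app/oracle/dbs/spfileORCL.ora'
  FROM PFILE = '/u01/app/oracle/dbs/initORCL.ora';
```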
Each SQL Server instance comes with a few system databases, using which the instance manages the other databases. An Oracle database, by contrast, is at creation time nothing more than a few files and folders.


In applications that use SQL Server it's common to open a connection to the database for each executing statement: if you are going to run six queries, you might well see six connections. SQL Server was designed that way - much like Windows was designed for multithreading, not multiprocessing. In Oracle, whether you want to run six queries or six hundred, the maximum number of connections you want to open is one. Each Oracle connection was designed to handle multiple sessions, so a single Oracle connection requires much more memory than a single SQL Server connection. Essentially, what's common practice against a SQL Server database is simply an incorrect approach against an Oracle database.

To conclude: as an architect or developer, remember that when porting an application from one database to another, you ought to account for the time needed to refine and adjust the application's architecture to be coherent with the database's architecture. I see development teams choose development frameworks specifically because they help their application be database-neutral, without understanding the implications. Even Oracle ships two separate database management applications for Linux and Windows, even though Java runs on both. That's not because Oracle can't ship one application; it's because Windows and Linux are fundamentally different - one uses processes (Linux) and the other threads.

If you are in the same situation and having problems, please don’t hesitate to contact me and share your problems.

Saturday, September 16, 2006

SQL Server guys' problem of learning Oracle - Part 1

In this first episode I’m going to write about the main issue a SQL Server expert might face as he starts learning Oracle.

I’ve always liked Microsoft SQL Server, not only because it’s easier to manage and I can be more productive with it - which isn't the whole point, of course - but also because it’s one of the Microsoft products that has given me the least amount of trouble (bugs, security holes, etc.) compared to products like Visual Studio or Windows itself.
After a while, I realized that knowing about other DBMSs is necessary too, because working with similar products in the same field gives you a lot of new ideas and, at the same time, options.

Starting with Oracle can be confusing if your mind is set on SQL Server. I, myself, had to start by figuring out the differences in terminology ("instance" and "database", to begin with). The Oracle databases I set up ran into numerous issues, and it was all merely because I wanted to do things the same way I used to do them in SQL Server.
It’s true that they have similarities - lots and lots of similarities. But the way they work is different. If you are a developer who doesn't look at the database as a black box, you understand what I am talking about.
In my opinion, you have to learn Oracle from scratch, in the same way you learned SQL Server. This seems very obvious. But many good developers I run into don't seem to have realized this.
You have to understand how things really work in Oracle and solve problems the Oracle way, not the SQL Server way.
The same would be true if you were an Oracle DBA or programmer wanting to learn SQL Server. That’s why in the next post I am going to compare quite a few concepts of Oracle and SQL Server that sound similar but are not, and show you how they differ. That will hopefully make it easier for those who work with SQL Server to start learning Oracle.
There is a book from an expert that could make the learning process easier: “Expert Oracle Database Architecture” by Thomas Kyte. I wish it had been available when I was starting with Oracle.

Thursday, September 14, 2006

SysML and an example from RUP

Hi everyone

I'm so excited to make this post, as it's the first one in my first blog. You know, I was thinking about what I should start with - databases, C#, or something not technical, since I'm at the beginning. But then I decided to start with something I love most: RUP. Well, this post is not purely about RUP, but the sample I'm going to use here is from RUP.
Here we go...
SysML (Systems Modeling Language) is a modeling language proposed by OMG (the consortium that developed UML), and its specification has been developed by many companies, including IBM. Yes, you might say it’s a sort of extension to UML, and you are right. I believe sooner or later it will become the next famous modeling language. But you may ask why OMG has proposed a new modeling language. What is UML’s weakness? Well, I suggest you read a three-part article from IBM's developerWorks web site to figure that out. But to keep you excited, here's an example from RUP:
In RUP’s Inception phase, there is a workflow named Understand Stakeholder Needs, where the System Analyst has a few tasks to do - which involve the stakeholders as well - to understand requirements by collecting information. As you can see from RUP's guideline, there is a task named Manage Dependencies that has a few output artifacts, including the Use-Case Model and the Supplementary Specifications. In the use-case model you can only model the customer's functional requirements; non-functional requirements (e.g., performance) have to be documented in the Supplementary Specifications document. But in SysML - as you may read in the article - there is a Requirements diagram that allows you to model both types of requirements, so you have a single place to look at your requirements and manage them.
That's it for the first post.
I appreciate your comments and feedback a lot. Please let me know how helpful this post was to you.