Sunday, January 11, 2015

2014 in terms of Data Security for the Industry

Besides being a bad year in terms of air disasters, 2014 has left some ugly scars in terms of cyber-hacks as well

The most significant ones started with the eBay hack, going back to Feb/Mar, when user credentials and critical info of 223 million users were siphoned off by hackers, with losses to the tune of 145 million dollars for the company. JPMorgan Chase was next in line, where hackers stole information related to 80 million households and 7 million small to medium-sized businesses, one of the largest breaches in banking history. The iCloud hack that leaked private pictures of Hollywood celebrities was another unexpected one in the series

It was one of the most brutal attacks on the safety of user data when news of the Target breach was aired, with 110 million records of buyer information stolen from its servers ... the total cost of the breach has exceeded 150 million dollars for the company. The data breach reported at Home Depot was the last nail in the coffin, with sensitive information stolen for 56 million customers. And the list is endless: AOL, UPS, Yahoo Japan, Staples

Hackers have different ways to capitalize on the stolen booty. In one case, hackers stole customers' credit card data from P.F. Chang's between March and May 19, 2014, and then put it up for sale for between $18 and $140, depending on how fresh the stolen data was. The restaurant chain was forced to go low-tech, reverting to old manual credit card imprinting machines until it invested millions to upgrade its terminals to enforce strong encryption

However, the world won't be the same again for Sony Pictures. Whatever the reasons or intent may have been, the hundreds of terabytes of copyrighted intellectual property stolen from Sony's servers have shifted the focus onto the need for IT security. Taking a lesson from Sony's catastrophic loss, hundreds of industry majors are planning to devote a larger share of their 2015-2016 budgets to preventive measures, to ensure the safety and security of sensitive customer data and intellectual property

And it's not that companies don't take note of these breaches. They do, but their responses are reactive in nature, and they don't help win back what is already lost. Besides the sky-rocketing costs associated with these breaches, the worst loss for any of these companies is the loss of customer confidence in the company's data security measures and related policies. In the wake of the increasing heat, Target may have let its CEO go, but that couldn't contain its quarterly losses, and investors started looking for safer ventures

Taking a closer look at these events, all these breaches are the brainwork of implanted malware that logs keystrokes or opens backdoor access, some intelligent guesswork, brute force, and cyber-sniffing, blended together with some tailored tech advancements. A significant percentage of these attacks rely on brute force and on intelligent guesswork based on patterns suggested by customized software. This is where we all, as individuals, can put up a tough fight. Most of our passwords are based on plain words or names, at best combined with a number or two. To spare our brains the effort of remembering, we keep the same password for a multitude of online accounts, which in turn makes us more vulnerable. Adding to this breeding ground for unscrupulous events, we rarely think of changing our passwords at frequent intervals. That's all the hackers need for their perfect world. Using a combination of special characters with letters and numbers, and changing it at frequent intervals, is enough to give the smartest of hackers a good run for their money. Another rule of thumb is never to share a password, unless it happens to be a shared account and there's an absolute need to retain shared access
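
If it helps to make that concrete, here is a minimal, illustrative one-liner for generating a password of that kind from the command line (it assumes openssl is available; a password manager does the same job with less typing):

# Generate 16 random bytes and base64-encode them: roughly 22 characters
# mixing upper/lower-case letters, digits and symbols -- far harder to
# brute-force or guess than a dictionary word with a digit or two tacked on
openssl rand -base64 16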

The chosen few of this post's readers who also happen to be technologists by profession have an even greater role to play. As techies, we can all be watchdogs in our individual roles, identifying every possible breeding ground for these events and working towards filling the potential gaps. Indeed, no matter how strong a fence you build in terms of measures taken to avoid these attacks, no one can guarantee foolproof fortification against any of these events. Even so, we need to think through all possible ways to head them off in advance, by being extra cautious about the security of the user data we handle and manage. This might require educating our customers to make them understand its importance. Even if a customer does not pay heed, it will at least keep us from being in the same boat as USIS, which came under fire when it suffered a data breach. The reason for the fire: being a contractor for the Dept. of Homeland Security, it held millions of records related to citizens' background checks and other critical information

It's true that we learn from mistakes, but sometimes the cost of a mistake is so gigantic that we can't afford to commit it. And in this case, every conscious effort taken towards securing our customers' sensitive data COUNTS !!!

Monday, December 8, 2014

Don't stress your database beyond its limits, or it might commit suicide

Running into an ORA-600 is almost a daily affair for an Oracle DBA, but an ORA-600 that leads the database to commit suicide could be classified as a true example of a "once in a blue moon" event

As part of a consulting engagement in Pittsburgh, I was troubleshooting the erratic behavior of database parallelism. At one point, we decided to add some parameters within RMAN to test the impact. While scanning all of the traces generated from the crash dump, looking for pointers and possible trends, we witnessed something quite unusual: the traces admitting that the database had committed suicide

/u01/app/oracle/diag/rdbms/dwdev/trace>tail -10 /u01/app/oracle/diag/rdbms/dwdev/trace/dwdev_smon_23343.trc
0xffffffff7df70000 - 0xffffffff7dfc0000    320K    8K     0xffffffff rw---  [ anon ]
0xffffffff7dfc0000 - 0xffffffff7dfe0000    128K    8K     0xffffffff rw---  [ anon ]
0xffffffff7dfe0000 - 0xffffffff7dff0000     64K    8K     0xffffffff rw---  [ anon ]
0xffffffff7ffb0000 - 0xffffffff80000000    320K    8K     0xffffffff rw---  [ stack ]
******************* End of process map dump ****************

internal error ORA-600 seen on the error stack process appears to be having problems repeatedly, committing suicide Background_Core_Dump = partial
ksdbgcra: writing core file to directory '/u01/app/oracle/admin/dwdev/cdump'
/u01/app/oracle/diag/rdbms/dwdev/trace>

As a follow-up, we checked and confirmed that the smon process was indeed dead ...
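
An OS-level check along these lines confirms it (the instance name dwdev comes from the trace paths above; no output means the process is gone, and since smon is a mandatory background process, the instance terminates along with it):

# Look for the smon background process of the dwdev instance
ps -ef | grep ora_smon_dwdev | grep -v grep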

Thursday, May 2, 2013

Should the Concurrent Manager share the ride in the Applications Tier van, or hop onto the Database van?


The Internet is flooded with posts detailing the benefits of the Concurrent Manager hosted on the Database Tier, as compared to running it on the Applications tier node alongside its other siblings, viz. the Admin tier, Web tier, Forms tier, etc. Indeed, whether the Concurrent Server benefits more from being hosted on the Database Tier or the Applications Tier has long been one of the most popular debates among industry DBAs

The result of this debate has never been unanimous, and despite numerous statistics indicating that the Concurrent Manager does not benefit significantly from being hosted on the Database Tier, there have been far more reported cases where the Concurrent Manager did benefit in terms of response time and throughput when hosted there. Eventually, Oracle made it a standard recommendation to host Concurrent Processing with the Database Tier in a multi-tier environment, if possible

This recommendation ruled the Oracle E-Business Suite architecture platform for quite a while, but it was pulled up for a fresh round of discussion when Rel 12 came in with a new architectural layout. The reason for digging this item out of its grave was the way the Applications tier components were re-organized in the new structural layout of Rel 12. In Rel 12, which comes with a unified APPL_TOP installed by default, the Applications components such as Concurrent Processing, the Web tier, Admin tier, Forms tier, etc. are present on all of the Applications tier nodes. Whereas it was possible to physically separate the Applications tier components up to 11.5.10.2, in Rel 12 the difference between nodes depends solely on the service groups activated on each Applications tier node. A few of the significant service groups worth considering are the Root Services (OPMN), Web Entry Point Services (Oracle HTTP Server), Web Application Services (the OC4J components OACORE, Forms and OAFM), Batch Processing Services (Applications TNS Listener, Concurrent Managers, Fulfillment Server), and Other Service Groups (Oracle Forms Services, Oracle MWA Services)

Consequently, the recommendation of having the Concurrent Manager share the ride with the Database tier is losing ground in Rel 12, driving industry experts to recommend running Concurrent Processing on its own separate tier, or with the rest of the Applications tier components. A side benefit of removing the Concurrent Manager from the Database tier is increased efficiency in regular manageability and periodic maintenance activities like patching and cloning, which then need to be performed on one less Applications tier node
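
To make the placement question concrete, a quick sketch like the one below (run as the APPS user, and assuming the standard FND_CONCURRENT_QUEUES columns) lists which node each concurrent manager is targeted to run on:

# Illustrative only: list each concurrent manager and the node it is
# targeted to run on (APPS_PWD is assumed to hold the APPS password)
sqlplus -s apps/"$APPS_PWD" <<'EOF'
SELECT concurrent_queue_name, target_node, max_processes
FROM   fnd_concurrent_queues
ORDER  BY concurrent_queue_name;
EOF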

Oracle has a dedicated document detailing this topic in My Oracle Support, Doc ID 406558.1

Wednesday, May 1, 2013

Journey of the Oracle Database: From Object-Oriented ... To Internet ... To Grid ... To Cloud ... To ...



Oracle is an organization I have always admired for being uber-audacious in implementing the latest and greatest in the technology arena to revolutionize the database vertical. Despite being a product company, Oracle has always grabbed every possible avenue to reinvent the wheel whenever it got the opportunity

It all started in 1999, when Larry decided to break free from merely adding enhanced object-oriented features, as in Database version 8. He planned to add features enabling the Database to inter-operate with the Internet in a more efficient and resilient fashion, and captioned it 8i, which came with a native Java Virtual Machine known as Aurora. He went on to further solidify the foundation of an Internet-application-ready database in 9i by adding around 500 new features

Oracle did a deep analysis of its 8i and 9i databases supporting thousands of Production instances across the globe, and concluded that their computing and resource needs were so widespread and ever-growing that it was next to impossible to scale them on demand in the traditional way. This requirement acted as the foundation stone for a focus on dynamic scalability and resource allocation. Oracle set out to address this in its next reinvented version of the database, under the flagship of 10g. The g stood for Grid Computing, by analogy with the electricity grid: the demand originating at a household is served without the household knowing where the power is generated or how it is transmitted. Similarly, business applications in need of database computing resources are served irrespective of the dynamic scale-up carried out behind the scenes when there is a surge in resource requirements

Another aspect worth noting in the 10g version was the stress put on the High Availability of databases. A substantial percentage of the databases serving Production applications across the globe were serving some kind of mission-critical application, and the possibility of an unscheduled database outage sounded scary enough to awaken the CIO from sweet midnight dreams. In other words, unscheduled outages were simply unaffordable owing to their catastrophic results. Oracle went the extra mile on High Availability options to ensure the database survives the Mayan Doomsday of 12/12/12, if not a real Doomsday

The grid architecture backbone released in 10g was further strengthened in 11g. Yet again, following the latest shifts shaping the database and computing world, the Cloud had become the buzzword. Since applications had started taking the Cloud route, it was paramount to reinvent the wheel once more and make the database Cloud-ready. Oracle has taken that leap by announcing Database 12c, the c this time standing for Cloud

For those of us who are DBAs, there is a plethora of interesting features wrapped in this new 12c version. However, since the product is still under wraps, we need to wait till it is out
One of the salient features worth considering is the concept of Pluggable Databases: system metadata and data are kept in the Container Database (CDB), whereas user metadata and data are kept in a Pluggable Database (PDB). This segregation not only provides better out-of-the-box manageability, it also goes a long way in regular maintenance activities like cloning, by reducing the overall cloning time and related resource utilization, and in periodic activities like database migration, where only the PDB needs to be unplugged and plugged back in, e.g. from 12c Rel 1 to 12c Rel 2
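
To give a flavour of the unplug/plug flow this enables, here is a minimal sketch (the PDB name and paths are purely illustrative, and the exact syntax is of course subject to what finally ships):

# Unplug the PDB from its current container into an XML manifest ...
sqlplus -s / as sysdba <<'EOF'
ALTER PLUGGABLE DATABASE hrpdb CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE hrpdb UNPLUG INTO '/u01/app/oracle/hrpdb.xml';
EOF

# ... and plug it into the target container database, then open it
sqlplus -s / as sysdba <<'EOF'
CREATE PLUGGABLE DATABASE hrpdb USING '/u01/app/oracle/hrpdb.xml' NOCOPY;
ALTER PLUGGABLE DATABASE hrpdb OPEN;
EOF
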
Another feature worth considering is Oracle's emphasis on achieving maximum performance gains from storage and I/O media in an automated fashion. The term to be used for this is Heat Map, an automated mechanism to identify the hot blocks of data (the data most frequently used by the applications) and re-organize them on the storage media to achieve the least access and I/O time
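
As a rough sketch of how this is expected to surface (object names are illustrative, and again the final syntax may differ), Heat Map is switched on via an initialization parameter and its statistics can then drive automated storage policies:

# Turn on Heat Map tracking, then attach a policy that acts on its statistics
sqlplus -s / as sysdba <<'EOF'
ALTER SYSTEM SET heat_map = ON SCOPE = BOTH;

-- Compress segments of this (illustrative) table once Heat Map shows
-- they have not been modified for 30 days
ALTER TABLE sales ILM ADD POLICY
  ROW STORE COMPRESS ADVANCED SEGMENT
  AFTER 30 DAYS OF NO MODIFICATION;
EOF
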
For customers running 10g and 11g Data Guard with the TAF (Transparent Application Failover) feature, there isn't any option to fail over the in-flight Inserts/Updates/Deletes in progress when a database failover happens owing to a crash; this is tentatively addressed in this new version, making it safer for organizations like banking institutions. However, as I said earlier, since the product is still behind the curtains, we will need to wait till Larry pulls back the curtain and makes it public … Go Larry Go
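
For context, this is roughly how a TAF-enabled service is defined on 10g/11g today (names are illustrative; it is created here via DBMS_SERVICE, whereas RAC / Grid Infrastructure shops would typically use srvctl). TAF of this kind replays in-flight SELECTs after a failover, while in-flight DML is rolled back, which is exactly the gap the 12c improvement is aimed at:

# Illustrative 10g/11g-style TAF service definition
sqlplus -s / as sysdba <<'EOF'
BEGIN
  DBMS_SERVICE.CREATE_SERVICE(
    service_name     => 'oltp_taf',
    network_name     => 'oltp_taf',
    failover_method  => 'BASIC',
    failover_type    => 'SELECT',   -- only SELECTs are replayed; DML is not
    failover_retries => 30,
    failover_delay   => 5);
  DBMS_SERVICE.START_SERVICE('oltp_taf');
END;
/
EOF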