Oakies Blog Aggregator

From Product X to SQL Developer

I recently worked at a company that used "Product X" for all of its SQL and PL/SQL activities.  There’s no real need to reveal what "Product X" is, because this isn’t a post about whether it was good, bad or somewhere in between as a product.  "Product X" met the needs of the Oracle developers in the company, and that’s all that matters.  There was just one issue – for Product X, the company was paying a little under $100k annually in license fees.

Enter SQL Developer.  Similarly, it’s not particularly relevant whether I, or anyone else at the company, think it’s better or worse than "Product X".  What does matter is – can it do the tasks the developers require?  And the answer to that is a resounding yes.  Once that question is answered, paying any kind of annual license fee for Product X is no longer an intelligent business choice.  So the organisation is in the process of moving to SQL Developer.  The trick now is how to enforce, er, assist :-) the developers with moving to SQL Developer.  And the reality is – once you show developers how to re-map the shortcut keys in SQL Developer to match what they were used to in Product X, the task is virtually done.

But some developers still clung to Product X…and hence were threatening a violation of the no-longer-current license agreement.  So the task then became – how to stop, er, dissuade :-) them from still using Product X.  This post gives you a few ideas on how you might want to tackle it.

Version 1

I assumed the best of our developers :-) and put a simple logon trigger in place to check whether the program they’d connected with was "ProductX.exe".  If it was, a job was submitted (via dbms_job) to pop an email to them letting them know that Product X was no longer the company standard.
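As a rough illustration, such a trigger might look something like the following minimal sketch – the notify_productx_user procedure and the exact program-name match are assumptions for illustration only:

create or replace trigger check_productx_logon
after logon on database
declare
  pragma autonomous_transaction;
  l_program varchar2(64);
  l_job     binary_integer;
begin
  -- identify the client program for the session that has just logged on
  select program
  into   l_program
  from   v$session
  where  sid = sys_context('USERENV', 'SID');

  if upper(l_program) like 'PRODUCTX.EXE%' then
    -- submit the notification as a job so the logon itself is not delayed
    dbms_job.submit(
      job  => l_job,
      what => 'notify_productx_user(''' || sys_context('USERENV', 'SESSION_USER') || ''');'
    );
    commit;  -- dbms_job submissions are transactional
  end if;
end;
/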

Version 2

But old habits die hard…and within a few days, some developers had renamed ProductX.exe to SQL Developer.exe and were back on the database, thrilled with their ingenuity :-) So the trigger was adjusted to also check the MODULE, as ProductX also set its session module value to "Product X".  But you can’t keep a good developer down, and with a little assistance from a hex editor, the module was now cleared.

Version 3

Most development tools, when they connect to the database, run a series of queries to check things such as privileges.  So by tracing a connection made by ProductX, a list of the cursors it executed could be identified.  The login trigger was then altered to always submit a job.  The job checked the program and module as before, but it also checked V$OPEN_CURSOR for the new session.  If it contained a selection of the statements that Product X executes at connect time, the trap had been sprung :-)
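A rough sketch of the check the submitted job might perform is shown below – the literal SQL fragments and the notification procedure are illustrative assumptions, not the actual statements ProductX runs:

create or replace procedure check_productx_session(p_sid in number, p_user in varchar2) as
  l_hits number;
begin
  -- the program and module checks from the earlier versions would go here, then:
  select count(*)
  into   l_hits
  from   v$open_cursor
  where  sid = p_sid
  and   (   sql_text like 'SELECT%PRODUCTX_PRIVS%'      -- statements the trace showed
         or sql_text like 'SELECT%PRODUCTX_SETTINGS%'); -- ProductX running at connect time

  if l_hits > 0 then
    notify_productx_user(p_user);   -- hypothetical mail procedure
  end if;
end;
/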

 

So Version 3 has been active for a couple of weeks now, and plenty of emails have been sent.  It will soon be amended to terminate connections, and thus, the migration to SQL Developer will be complete :-)
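For completeness, the termination step would presumably be something along these lines, run from the job once the trap has fired – a sketch only, with placeholder values where the real SID and serial# would come from the V$SESSION row already identified:

declare
  l_sid    number := 123;   -- placeholder values for illustration
  l_serial number := 456;
begin
  execute immediate 'alter system kill session ''' || l_sid || ',' || l_serial || ''' immediate';
end;
/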

EM Express versus EM Cloud Control

Recently I was asked the question “What is the real difference between EM Cloud Control [NOTE: I’ll refer to this as EM12c through the rest of this post] and EM Database Express in 12c?” It was (for me) a pretty easy question to answer, but I wanted to provide the questioner with a link to the place in the Enterprise Manager documentation where it covers that in detail. Somewhat to my surprise, I wasn’t able to find such a link – well, not quickly anyway. I think the reason for that is the documentation for EM Express (as it’s more commonly abbreviated to) falls under the database documentation which is owned by a different group in Oracle than the Enterprise Manager documentation. Well, that’s my speculation anyway. It may just be there in the documentation and I couldn’t find it in my quick search. :) Be that as it may, if I couldn’t find it in a quick search then I suspect others would have the same issue, so I thought it was worthwhile to bash out a quick post on the subject. And of course, what was originally going to be a quick post turned into something much bigger, so if you want, use the links below to move through the post:

What is EM Express?
What is EM12c?
What are the Differences Between the Two?

What is EM Express?

Before I start on what the differences are, it’s probably worthwhile to spend a little time explaining what EM Express actually is. The first thing to note is what it replaces, and that’s the Database Control product. Database Control is no longer available as of Oracle Database 12c. In its place, the tool to monitor a single database is EM Express. EM Express – or to give it its full name, Oracle Enterprise Manager Database Express 12c (now you can see why it’s normally called just EM Express!) – is an integrated GUI management tool for database administration of a single Oracle Database 12c database. Let’s explain that in a bit more detail.

Integrated Installation

EM Express is preconfigured and installed when the database kernel is installed, so when you install your ORACLE_HOME, you have also installed EM Express. If you have used DBCA to create a database, one of the questions you will be asked is whether you want to manage this database with EM Express or EM12c. You can actually select both checkboxes, though I found the last time I did that (admittedly quite some time ago when Oracle Database 12c was relatively new) that the installer got a tad confused by selecting both and it was better to choose one and then set up the other later. EM Express runs inside the database and has no extra middleware components (this is one area of differentiation between EM Express and the earlier Database Control product). EM Express leverages the XDB server inside the database. It supports both single instance and Real Application Clusters (RAC) databases.
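As a quick illustration of the "runs inside the database" point, EM Express is exposed through the XDB HTTPS listener, so checking or setting its port is done in the database itself – the port number below is just an example:

-- Check the HTTPS port EM Express is listening on (0 means it is not configured)
select dbms_xdb_config.gethttpsport from dual;

-- Configure (or move) EM Express to a port of your choice, e.g. 5500
exec dbms_xdb_config.sethttpsport(5500);

-- Then point a browser at https://<host>:5500/em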

Small Footprint

EM Express has a fairly small footprint in terms of disk storage, memory and CPU usage. The disk storage added to the kernel by EM Express is only around 20 MB. It requires minimal CPU and memory – in fact, none until you actually connect to it. All the UI rendering is done in the browser being used, reducing the footprint even more, and the database server is only used to run the SQL needed to retrieve the information being displayed in the GUI.

Comprehensive Administration

OK, comprehensive may be optimistic. :) This is one area where the tool doesn’t quite measure up as yet compared to Database Control, in that EM Express provides only basic administration support for configuration management, storage management, and security management. No doubt as new releases of the tool come out, this coverage will expand. Currently, the tool provides the following functionality:

  • Configuration Management – initialization parameters, memory, database feature usage, and current database properties
  • Storage Management – tablespaces, undo management, redo log groups, archive logs, and control files
  • Security Management – users, roles, and profiles
  • Performance Management – Performance Hub and SQL Tuning Advisor

One area where EM Express far outstrips Database Control (and in fact, even surpasses similar areas in EM12c) is the performance side. EM Express has quite advanced performance diagnostics and tuning capabilities. However, you do need to be licensed for the Diagnostics and Tuning packs to use these pages.
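If you are not licensed for those packs, the usual way to control this (and to enable the Performance Hub pages when you are licensed) is the CONTROL_MANAGEMENT_PACK_ACCESS initialization parameter, for example:

-- Show what pack usage the database currently allows
show parameter control_management_pack_access

-- Allow both Diagnostics and Tuning pack functionality (requires the licenses)
alter system set control_management_pack_access = 'DIAGNOSTIC+TUNING' scope=both;

-- Or set it to 'NONE' to disable pack-dependent functionality entirely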

You can find a little bit more on managing Oracle Database 12c with EM Express in my presentation from the APAC OTN Tour back in 2013, located here.

What is Enterprise Manager Cloud Control 12c?

OK, so that’s what EM Express handles. Now let’s have a quick overview of the EM12c product.

To start with, EM12c is more of a product line or product family than a single product, but it still allows you to manage your entire Oracle infrastructure from a single pane of glass, from application to disk. It even allows you to manage your non-Oracle infrastructure, with a variety of plugins and connectors to manage everything from non-Oracle databases (like SQL Server and DB2) to non-Oracle middleware (like WebSphere, Microsoft Active Directory and JBoss Application Server) and so on. And if there’s something it doesn’t yet have a plugin or connector for, you can use the Extensibility capabilities to write your own.

If you look at the key capabilities of EM12c, it really boils down to three main areas:

  • Cloud Lifecycle Management – EM12c helps enterprises discover their IT assets and plan their cloud by providing consolidation recommendations, and has a guided cloud setup wizard to help you define services such as Infrastructure as a Service, Middleware as a Service, and Database as a Service. With later releases, you also have Schema as a Service, Pluggable Database as a Service, Testing as a Service, and Snap Clone functionality. It also includes an out-of-the-box self service portal capability, along with APIs that can be used to create your own self service portal. Finally, it has chargeback capability that allows you to provide business users with a comprehensive report of the computing resources they are using, and even to invoice them for the costs of those resources if you want to.
  • Complete Application to Disk Management – EM12c can be used to monitor and manage every layer of your infrastructure from the application all the way down to the storage, including:
    • Application Management – EM12c provides an application management solution for packaged, third party and custom applications, including E-Business Suite, Siebel, PeopleSoft, JD Edwards, Fusion Apps, and Java or SOA based custom and third party apps.
    • Middleware Management – EM12c also manages the many Fusion Middleware components, such as WebLogic Server, SOA Suite, Coherence, Identity Management, Business Intelligence, GlassFish, Jolt and WebCenter.
    • Database Management – this is the area EM12c has been well known for over many years. Indeed, the first version of Enterprise Manager I ever saw, the 0.76 beta release, already contained a lot of database management capabilities. Obviously, those have all expanded over the years as the database capabilities have, but the product now includes additional functionality on top of the native database capabilities in database lifecycle management areas like automating mass database upgrades and patching.
    • Hardware and Virtualization Management – Traditionally, Enterprise Manager has been focused on software management. EM12c expands that to provide management capabilities like provisioning, patching, monitoring, administration and configuration management for both hardware and virtualization layers too. This is particularly true in the case of Oracle’s Sun hardware line.
    • Heterogeneous Management – Even though Oracle’s footprint has been expanding into so many different areas over the years, there are still going to be some non-Oracle products at most customer sites that need managing as well. Oracle has a number of plugins and management connectors that can be used to monitor and manage these third-party products. EM12c also includes an Extensibility Development Kit (EDK) to allow you to write your own management plugins. Many of these are available for other customers to use via the Extensibility Exchange.
    • Engineered Systems Management – EM12c is tightly integrated with a range of Oracle’s Engineered Systems, such as Exadata, Exalogic, Exalytics, SuperCluster, Virtual Compute Appliance (VCA), Big Data Appliance, and Oracle Database Appliance (ODA). That list just keeps expanding as more engineered systems are released, so customers can take advantage of integrated hardware, software and management capabilities that are engineered together to form a complete solution.
    • Application Quality Management – EM’s AQM products provide an end-to-end testing solution, including Application Testing Suite (OATS) which has test management, functional testing and load testing capabilities, Application Replay and Real Application Testing to provide production-scale testing of the application and database infrastructure, and test data management capabilities like data masking and subsetting.
    • Application Performance Management – APM delivers business driven application management with end to end monitoring that includes user experience management (which includes both real user monitoring via RUEI and synthetic transaction monitoring via SLM beacons), business transaction management (which allows you to monitor and trace transactions from a business perspective), Java and database monitoring and diagnostics, and application performance analytics.
  • Enterprise-grade management – Not only is it important that you have the ability to manage your Oracle infrastructure with EM12c, but it’s also important that EM12c itself can support mission critical workloads. Way back when, it really didn’t matter most of the time if your management tool went down every so often, but now people are realizing it’s as important to have mission critical management for your mission critical workloads. It’s not unusual to find Enterprise Manager installations protected by RAC databases with Data Guard as well, along with software load balancers and the whole enchilada. EM12c has been thoroughly tested for scalability (see here for one of my very early posts on this, and just recently we saw that very same environment hit 2 million targets!). EM12c’s Self Update framework ensures you can easily incorporate management of the latest and greatest products from Oracle. And in addition, the security model introduced with EM12c allows tightly controlled access to the EM12c product itself. So all of these points allow you to be sure EM12c can manage the needs of even the largest environments.
    So What are the Differences Between the Two?

    Now you know what the two products are capable of, you can probably pick out some of the differences. :) But let’s be 100% clear on these.

    The most obvious difference is that EM Express is a management tool that manages a single database, while EM12c manages your entire Oracle (and non-Oracle) data center. If all you need to manage is a few Oracle databases, then EM Express will be more than capable of meeting your needs. You will have to connect to each database separately to manage it though. However, as soon as you need to manage more than a few databases, or you want to manage more than just databases (i.e. you want to manage middleware, apps and so on), EM12c is your tool of choice. We have customers with literally thousands of database targets they need to manage. Try doing that with EM Express! :)

    The second difference is that EM Express is an agentless architecture. Everything you need to connect to EM Express is built into the database kernel. You just need to point a browser to the correct page, and away you go. EM12c, on the other hand, uses an agent to both discover and then manage targets. For each host that you add to an EM12c environment, you add an agent to that host and it then discovers and promotes all the targets (such as databases, listeners, WLS environments and so on) that are on that host. When you need to perform a task on those targets (such as shutting down a database), EM12c communicates with the agent on the host containing that database and it sends the message to the database to shut down. If you think about it, using the agents like this is really the only way to ensure a scalable solution in large environments.

    The third difference is functionality. As I mentioned above, EM Express provides a specific set of functionality, and if you want more you need to step outside of EM Express to either a tool like SQL Developer (I haven’t touched on that here because the article would be even longer, but if you want more on that go and visit Jeff Smith’s site for more details) or EM12c. Just a couple of examples – metric extensions, Data Guard management, and chargeback are all only available from EM12c rather than EM Express.

    The fourth difference is security. To log in to EM Express, you need to be a user that has the DBA, EM_EXPRESS_BASIC or EM_EXPRESS_ALL roles. The EM_EXPRESS_BASIC role provides read-only access to EM Express, so users with that role can view the UI but not make any changes. This role also includes the SELECT_CATALOG_ROLE role. The EM_EXPRESS_ALL role obviously grants a lot more access. :) It has full read / write access to all EM Express features, and includes the EM_EXPRESS_BASIC role. Compare that to the security model for EM12c, where there are quite a number of roles and over 100 fine-grained privileges, and you can see that EM12c has a much more granular security model.
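    For example, giving a monitoring-only account read access to EM Express would look something like this (the user names are made up for illustration):

    grant em_express_basic to monitoring_user;   -- read-only access to the EM Express pages
    grant em_express_all   to junior_dba;        -- full read/write access to EM Express features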

    There are probably other more minor differences between the products that I haven’t covered here, but to my mind these are the main differences. Hope that helps explain the differences between the two!

    Friday Philosophy – Why I Volunteer for User Groups

    I’ve just noticed a new page about me popping up on the UKOUG web site – it’s in the section about volunteer case studies, alongside people like Joel Goodman, Simon Haslam, Carl Dudley, Jason Arneil, Brendan Tierney and others who have been stupid, er, good enough to give time and effort to the UKOUG.
    {You can get to the page by going to the UKOUG home page (www.ukoug.org) and clicking the Membership or Member Activities tab and Case Studies & Testimonials under that and finally Volunteer Case Studies. Phew. Or follow the link I gave at the start and click on the other names.}

    I’m not sure how long I’ve been up on there but only a couple of days I think.

    Anyway, Why DO I volunteer for user groups?

    The little bio covers most of it but I thought I would put some words here on my blog too. I volunteer because, fundamentally, I am a socialist (with a small ‘S’) – I feel that we are all better off if we all help each other. I’ve been helped by people in my career (presenting stuff I don’t know, giving advice), and I guess I feel that I should return that favor. Many of the people who have helped me (and continue to do so) stand to gain nothing personally by helping me. In fact, one or two have helped me when, strictly speaking, they are helping create a rival for work opportunities. I try to do the same for those around me. I know, it sounds a bit “Disney film teaching the kids to do right” goody-two-shoes, but that is the core of it. And there are some other aspects to it too…

    Why do I volunteer for the UKOUG specifically? Because they are THE main user group in my geographic area and provide the most support to the Oracle user community here in the UK. Most of the people involved in the UKOUG are just nice people too. But I also support and volunteer for smaller user groups, mostly by promoting their meetings, going to them or presenting. I started presenting at the main UKOUG conference back when Dido, Eminem and Christina Aguilera were in their heydays. I also went to the RDBMS and similar SIGs and before long I was presenting at them and then got sucked into chairing one of them – the Management and Infrastructure SIG. I’ve been slowly sucked in more & more as the years roll by.

    That has led on to me presenting at other user groups in different countries. Actually, I used to do quite a bit of presenting abroad (mostly the US) around 10 years ago, but that was part of the role I had at the time and my employer paid the bills. There is no employer to pay the bills now, but as it is my own time I try to make presenting abroad also a chance to have a short holiday – I take a day or two either side of the event to look around. And actually, it is nice spending time with other people who present at or attend user group meetings.

    Another part of it is I just like presenting. This is not quite so Disney Nice Guy; there is an aspect that is more selfish, in that standing up, being listened to and telling people stuff that maybe they don’t know makes me feel better about myself. Better about myself? OK, I’ll let that stand for now, but it is more that it makes me feel I am achieving something and having an impact. That I am useful. Fundamentally it is still a desire to help, and presenting does not scare me (I know it is scary for a lot of people, but then a lot of people are not scared of heights and I am – it all balances out). But with a slice of “look at me!!!” thrown in.

    There are also rewards for the effort. I’ve got to know a lot more people as a result of presenting, blogging (and now tweeting) than I would have had I stayed just one of the audience. For me it has helped me make more friends. As I said above, part of what is now nice about user group meetings for me is meeting friends I’ve made who are also on the speaker circuit, and there are inevitably a few drinks in the evening whenever there is a user group. It also gives you more exposure in the community and helps lead to job opportunities – or at least that is the theory. No one has yet offered me a job because they liked my blog post or presentation!

    That leads me to the last aspect of volunteering. Some people volunteer primarily for selfish reasons: to get bragging rights, to get it on their CVs, to help them get sales contacts or better jobs. The odd thing is, people who do it for those reasons tend not to last – as volunteering for user groups is a lot of hard work to get those rewards. You can usually spot them as they are the ones who don’t actually do a lot or complain all the time about the coffee being bad (actually, usually the coffee IS bloody terrible) and other things. Don’t get me wrong, some of those rewards do come with the volunteering, but if someone is volunteering primarily to get them, it does not seem to work out for them. Or maybe that is my socialism coming out again :-). Fundamentally, I think volunteering only works if, at the core of it, you want to help other people. Maybe that is why other volunteers are such nice people to hang around with.

    Why do you do it? (or not).

    EM12c : Login to GUI with the correct password causes authentication failure

    So the other day I was trying to log in to my EM12c R4 environment with the SSA_ADMINISTRATOR user, and I got the error:

    “Authentication failed. If problem persists, contact your system administrator”

    I was quite sure that the password I had was correct, so I tried with the SYSMAN user and had the same error. I still wanted to verify that I had the correct password, so I tried to log in to the repository database with the SYSMAN user, and was successful, so I knew something was wrong there.

    
    SQL> connect sysman/
    Enter password:
    Connected.
    
    

    So I went to the /em/EMGC_OMS1/sysman/log/emoms.log file and saw the following error:

    
    2015-05-18 21:22:06,103 [[ACTIVE] ExecuteThread: '15' for queue: 'weblogic.kernel.Default (self-tuning)'] ERROR audit.AuditManager auditLog.368 - Could not Log audit data, Error:java.sql.SQLException: ORA-14400: inserted partition key does not map to any partition
    ORA-06512: at "SYSMAN.MGMT_AUDIT", line 492
    ORA-06512: at "SYSMAN.MGMT_AUDIT", line 406
    ORA-06512: at line 1
    
    

    This led me to believe that JOB_QUEUE_PROCESSES was set to 0, but that wasn’t the case, since it was set to 50. However, that value was still too low, so I bumped it up to 1000 and re-ran the EM12c repository DBMS Scheduler jobs as described in MOS note 1498456.1:

    
    SQL> show parameter JOB_QUEUE_PROCESSES
    
    NAME TYPE VALUE
    ------------------------------------ ----------- ------------------------------
    job_queue_processes integer 50
    SQL> alter system set JOB_QUEUE_PROCESSES=1000 scope = both;
    
    System altered.
    
    SQL> show parameter both
    SQL> show parameter job
    
    NAME TYPE VALUE
    ------------------------------------ ----------- ------------------------------
    job_queue_processes integer 1000
    SQL> connect / as sysdba
    Connected.
    SQL> alter system set job_queue_processes = 0;
    
    System altered.
    
    SQL> connect sysman/alyarog1605
    Connected.
    SQL> exec emd_maintenance.remove_em_dbms_jobs;
    
    PL/SQL procedure successfully completed.
    
    SQL> exec gc_interval_partition_mgr.partition_maintenance;
    
    PL/SQL procedure successfully completed.
    
    SQL> @$OMS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_recompile_invalid.sql SYSMAN
    old 11: AND owner = upper('&RECOMPILE_REPOS_USER')
    new 11: AND owner = upper('SYSMAN')
    old 26: dbms_utility.compile_schema(upper('&RECOMPILE_REPOS_USER'),FALSE);
    new 26: dbms_utility.compile_schema(upper('SYSMAN'),FALSE);
    old 41: WHERE owner = upper('&RECOMPILE_REPOS_USER')
    new 41: WHERE owner = upper('SYSMAN')
    old 84: AND owner = upper('&RECOMPILE_REPOS_USER')
    new 84: AND owner = upper('SYSMAN')
    old 104: AND ds.table_owner = upper('&RECOMPILE_REPOS_USER')
    new 104: AND ds.table_owner = upper('SYSMAN')
    
    PL/SQL procedure successfully completed.
    PL/SQL procedure successfully completed.
    
    SQL> connect / as sysdba
    Connected.
    SQL> alter system set job_queue_processes = 1000;
    
    System altered.
    
    SQL> connect sysman/
    Enter password:
    Connected.
    SQL> exec emd_maintenance.submit_em_dbms_jobs;
    
    PL/SQL procedure successfully completed.
    
    SQL> commit;
    
    Commit complete.
    
    

    After this I bounced the OMS, but I still kept getting the same error. Although the scheduler jobs were now fixed, I was seeing the following error in the emoms.log:

    
    2015-05-18 22:29:09,573 [[ACTIVE] ExecuteThread: '15' for queue: 'weblogic.kernel.Default (self-tuning)'] WARN auth.EMRepLoginFilter doFilter.450 - InvalidEMUserException caught in EMRepLoginFilter: Failed to login using repository authentication for user: SSA_ADMIN
    oracle.sysman.emSDK.sec.auth.InvalidEMUserException: Failed to login using repository authentication for user: SSA_ADMIN
    
    

    So I updated the SYSMAN.MGMT_AUDIT_MASTER table and ran the MGMT_AUDIT_ADMIN.ADD_AUDIT_PARTITION procedure, as described in MOS document 1493151.1:

    
    oracle $ sqlplus
    
     
    
    Enter user-name: sysman
    Enter password:
    
    SQL> update mgmt_audit_master set prepopulate_days=5 where prepopulate_days is null;
    
    1 row updated.
    
    SQL> select count(1) from mgmt_audit_master where prepopulate_days is null;
    
    COUNT(1)
    ----------
    0
    
    SQL> exec mgmt_audit_admin.add_audit_partition;
    
    PL/SQL procedure successfully completed.
    
    SQL> commit;
    
    Commit complete.
    
    

    Once I did this, I was able to login with all my EM12c administrators without any issues:

    
    oracle@em12cr4.localdomain [emrep] /home/oracle
    oracle $ emcli login -username=ssa_admin
    Enter password
    
    Login successful
    
    

    Conclusion

    Even though JOB_QUEUE_PROCESSES was not set to 0, it was still the cause of the failure, because the value was too low. So be careful when setting this parameter, and be sure to follow the latest installation guidelines.

    Note– This was originally published on rene-ace.com

    Understanding SQL

    From time to time someone publishes a query on the OTN database forum and asks how to make it go faster, and you look at it and think it’s a nice example to explain a couple of principles because it’s short, easy to understand, obvious what sort of things might be wrong, and easy to fix. Then, after you’ve made a couple of suggestions and explained a couple of ideas the provider simply fades into the distance and doesn’t tell you any more about the query, or whether they’ve taken advantage of your advice, or found some other way to address the problem.

    Such a query, with its execution plan, appeared a couple of weeks ago:

    UPDATE FACETS_CUSTOM.MMR_DTL
    SET
    	CAPITN_PRCS_IND = 2,
    	FIL_RUN_DT = Current_fil_run_dt,
    	ROW_UPDT_DT = dta_cltn_end_dttm
    WHERE
    	CAPITN_PRCS_IND = 5
    AND	HSPC_IND ='Y'
    AND	EXISTS (
    		SELECT	1
    		FROM	FACETS_STAGE.CRME_FUND_DTL_STG STG_CRME
    		WHERE	STG_CRME.MBR_CK = MMR_DTL.MBRSHP_CK
    		AND	MMR_DTL.PMT_MSA_STRT_DT BETWEEN STG_CRME.ERN_FROM_DT AND STG_CRME.ERN_THRU_DT
    		AND	STG_CRME.FUND_ID IN ('AAB1', '1AA2', '1BA2', 'AAB2', '1AA3', '1BA3', '1B80', '1A80')
    	)
    AND	EXISTS (
    		SELECT	1
    		FROM	FACETS_CUSTOM.FCTS_TMS_MBRID_XWLK XWLK
    		WHERE	XWLK.MBR_CK = MMR_DTL.MBRSHP_CK
    		AND	MMR_DTL.PMT_MSA_STRT_DT BETWEEN XWLK.HSPC_EVNT_EFF_DT AND XWLK.HSPC_EVNT_TERM_DT
    	)
    ;
    
    -------------------------------------------------------------------------------------------------------
    | Id  | Operation                     | Name                  | Rows  | Bytes | Cost (%CPU)| Time     |
    -------------------------------------------------------------------------------------------------------
    |   0 | UPDATE STATEMENT              |                       |     1 |   148 | 12431   (2)| 00:02:30 |
    |   1 |  UPDATE                       | MMR_DTL               |       |       |            |          |
    |   2 |   NESTED LOOPS SEMI           |                       |     1 |   148 | 12431   (2)| 00:02:30 |
    |*  3 |    HASH JOIN RIGHT SEMI       |                       |    49 |  5488 | 12375   (2)| 00:02:29 |
    |   4 |     TABLE ACCESS FULL         | FCTS_TMS_MBRID_XWLK   |  6494 | 64940 |    24   (0)| 00:00:01 |
    |*  5 |     TABLE ACCESS FULL         | MMR_DTL               |   304K|    29M| 12347   (2)| 00:02:29 |
    |*  6 |    TABLE ACCESS BY INDEX ROWID| CRME_FUND_DTL_STG     |     1 |    36 |     5   (0)| 00:00:01 |
    |*  7 |     INDEX RANGE SCAN          | IE1_CRME_FUND_DTL_STG |     8 |       |     1   (0)| 00:00:01 |
    -------------------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
       3 - access("XWLK"."MBR_CK"="MMR_DTL"."MBRSHP_CK")
           filter("XWLK"."HSPC_EVNT_EFF_DT"<=INTERNAL_FUNCTION("MMR_DTL"."PMT_MSA_STRT_DT") AND
                  "XWLK"."HSPC_EVNT_TERM_DT">=INTERNAL_FUNCTION("MMR_DTL"."PMT_MSA_STRT_DT"))
       5 - filter("CAPITN_PRCS_IND"=5 AND "HSPC_IND"='Y')
       6 - filter(("STG_CRME"."FUND_ID"='1A80' OR "STG_CRME"."FUND_ID"='1AA2' OR
                  "STG_CRME"."FUND_ID"='1AA3' OR "STG_CRME"."FUND_ID"='1B80' OR "STG_CRME"."FUND_ID"='1BA2' OR
                  "STG_CRME"."FUND_ID"='1BA3' OR "STG_CRME"."FUND_ID"='AAB1' OR "STG_CRME"."FUND_ID"='AAB2') AND
                  "STG_CRME"."ERN_FROM_DT"<=INTERNAL_FUNCTION("MMR_DTL"."PMT_MSA_STRT_DT") AND
                  "STG_CRME"."ERN_THRU_DT">=INTERNAL_FUNCTION("MMR_DTL"."PMT_MSA_STRT_DT"))
       7 - access("STG_CRME"."MBR_CK"="MMR_DTL"."MBRSHP_CK")
    
    

    The most informative bit of narrative that went with this query said:

    “The table MMR_DTL doesnt have index on these columns CAPITN_PRCS_IND , HSPC_IND .. Since this is an update stmt which will update 85k records, In a dilema whether to index these columns or not .. And this table MMR_DTL is an ever growing table. Worried about the update performance. “

    This was in response to an observation that there was a full tablescan on MMR_DTL at operation 5 despite the predicate “CAPITN_PRCS_IND”=5 AND “HSPC_IND”=’Y’. You’ll note that the predicted cardinality for that scan is 304K and the update statement is going to change CAPITN_PRCS_IND from the value 5 to the value 2 – so it’s not entirely unreasonable to be worried about the impact of creating an index that included the column capitn_prcs_ind.

    What more can we say about this query, given the limited information? Lots – unfortunately the owner of the query isn’t giving anything else away.

    I’m going to leave this note unfinished to give people a little chance to think about the clues in the request, the questions they might ask, reasons why there might be a performance problem, and strategies they might investigate, then I’ll update the posting with a few ideas some time in the next 24 hours.

    Update 1 – 24th May

    There are so many ideas that spring up from a small amount of information that it’s very hard to write a concise and coherent description of what you’ve noticed, when and how far you pursued it, and how relevant the ideas might be to the problem in hand – especially when most of the thoughts require you to ask for more information. Something has come up that has taken up the free time I had to write this note, so I’m just going to have to complete it in rapidly written installments. The first bit is an outline of the immediate response I had to the initial presentation of the problem and the execution plan that went with it.

    The only comment from the OP on this statement and plan was: “I couldnt optimize this query for better performance and optimized cost.. Can some one guide me on this.”

    We have no idea how many rows would be updated, how long it took, or how long the OP thinks it ought to take; it’s not until a subsequent post that we learn that the number of rows targeted for update is 85,000 – which tells us that the optimizer has run into some problems with its cardinality estimates. This suggests that IF there’s a serious performance problem then POSSIBLY there’s a better execution plan and we might get the optimizer to find it automatically if we could tell it how to adjust its cardinality estimates. It would be nice, however, to know where the query spent its time (i.e. can we re-run it with rowsource execution stats or monitoring enabled, and see the actual run-time work in the plan).
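    For reference, a common way of getting that run-time information (assuming the statement can be re-run, ideally in a test environment and rolled back) is to enable rowsource execution statistics for the run and then report the actual figures from the plan, along these lines:

    -- gather actual rowsource execution statistics for the next run of the statement
    alter session set statistics_level = all;

    -- ... run the update here (and roll it back if this is just a test run) ...

    -- report the last plan for this session with actual starts, rows and time per operation
    select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));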

    If it took a couple of minutes to update that 85,000 rows, I probably wouldn’t want to spend time making it go faster; if it took 2 hours, of which 1 hour 50 minutes was spent waiting for a transaction (row) lock, then I’d want to look at why the update collision could happen and see if that problem could be avoided – it might then be the case that the last 10 minutes was spent rolling back and restarting an update that ought to have taken 2 minutes “in vacuo”. Underlying all this, I would want to be sure (as I’ve implicitly, and I think reasonably, assumed) that it’s an update that runs only occasionally, perhaps once per day or once per week.

    In the absence of hard information – let’s resort to a few hypotheticals; looking at the plan itself (and knowing the target 85,000 rows) I am prepared to make a few guesses about the run-time activity.

    1. We build an in-memory hash table from the whole of FCTS_TMS_MBRID_XWLK, a step for which the optimizer ought to be able to give a reasonable cost and cardinality – assuming (as I will from now on) that the basic stats are reasonably accurate.
    2. We scan the (fairly large) MMR_DTL table, applying a couple of filters; again the optimizer ought to do a reasonable job of estimating the cost of such a scan, and we might expect a significant fraction of the time to be spent on multiblock (possibly direct path) reads of the table. The cardinality reported is 304,000 but we note there are two predicates and both are for columns which we might guess have a small number of distinct values – one of which we are changing. Perhaps there’s a bad cardinality error there and maybe a couple of single column histograms would help, but maybe column group stats with a histogram on the pair would be even better (see the sketch just after this list). I also wonder when (if) HSPC_IND ever changes from Y to N, and contemplate the possibility of creating a function-based index that records ONLY the rows that match this predicate pair (see the note on indexing that will appear some time over the next week further down the page). It’s at this point that we might ask whether the number of rows returned by this scan should be very similar to the number of rows updated, or whether the scan identifies far too many rows and the two existence tests do a lot of work to eliminate the excess and, if the latter, which test should we apply first and how should we apply it.
    3. Having scanned the MMR_DTL we probe the in-memory hash table copy of FCTS_TMS_MBRID_XWLK for the first match, using an equality predicate (which will be the access predicate) and a range-based (filter) predicate which looks as if it is checking that some “start date” is between an “effective date” and a “termination date”. The estimated size of the result set is FAR too small at 49 rows when we know we have to have at least 85,000 rows survive this test; moreover, this tiny estimate comes from inputs of 6,500 and 304,000 rows being joined so we ought to wonder how such a tiny estimate could appear. A possible explanation is that the application has used some extreme dates to represent NULL values. If that’s the case then it’s possible that suitable histograms might help the optimizer recognise the extreme distribution; alternatively virtual columns that change the extreme values back to NULL and a predicate referencing the virtual columns may help.
    4. After estimating the cardinality of the intermediate result so badly, the optimizer decides that the second existence test can be performed as a semi-join using a nested loop. The thing to note here is that the optimizer “knows” that this is an expensive step – the cost of each table access operation is 5 (4 + 1) – but it’s a step that shouldn’t happen very frequently so the cost is considered acceptable. We know, however, that this step has to execute at least 85,000 times, so the optimizer’s prediction of visiting 4 blocks in the table to identify (on average) 8 rows and discard (on average) 7 of them looks nasty. Again we note that one of the predicates is range-based on a pair of dates – and in this case we might wonder whether or not most of the rows we discard are outside the date range, and whether we ought to consider (as a general point, and not just for this query) whether or not we should add one, the other, or both of the ERN_FROM_DT and ERN_THRU_DT columns to the IE1_CRME_FUND_DTL_STG index. It’s at this point in the query that we are most ignorant of time spent at present (and in the future as the MMR_DTL table grows) – on one hand it’s possible that the rows for each MMR_DTL are widely scattered across the CRME_FUND_DTL_STG and this step could do a lot of random I/O, on the other hand the (assumed) time-dependent nature of the data may mean that the only MMR_DTL rows we look at are recently entered and the associated CRME_FUND_DTL_STG rows are therefore also recently entered and closely clustered – leading to a beneficial “self-caching” effect at the “high” end of the table as the query runs, which introduces an efficiency that the optimizer won’t notice. There is one numerical highlight in this join – we have a cost of 5 for each probe and 49 rows to test, so we might expect the incremental cost of the query to be around 250, but the difference between operations 3 and 2 is only 56 – suggesting that the optimizer does have some “self-caching” information, possibly based on there being a significant difference between the two tables for the number of distinct values of the join column. (See, for example: http://oracle-randolf.blogspot.co.uk/2012/05/nested-loop-join-costing.html )
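    To make the column group idea from point 2 concrete, here’s a sketch – the owner, table and column names are taken from the plan above, so treat the whole thing as an assumption to be tested rather than a recommendation:

    -- column group (extended) statistics on the two filter columns, with a histogram,
    -- so the optimizer can estimate their combined selectivity
    select dbms_stats.create_extended_stats(
             ownname   => 'FACETS_CUSTOM',
             tabname   => 'MMR_DTL',
             extension => '(CAPITN_PRCS_IND, HSPC_IND)'
           ) as extension_name
    from   dual;

    begin
      dbms_stats.gather_table_stats(
        ownname    => 'FACETS_CUSTOM',
        tabname    => 'MMR_DTL',
        method_opt => 'for columns (CAPITN_PRCS_IND, HSPC_IND) size 254'
      );
    end;
    /

    -- the alternative from point 2: a function-based index holding entries only for
    -- rows that match the predicate pair (the query would need the same expression)
    create index mmr_dtl_capitn5_hspc_y_fbi on facets_custom.mmr_dtl (
      case when capitn_prcs_ind = 5 and hspc_ind = 'Y' then 1 end
    );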

    Update 2 – 25th May

    Scalability is a question that should always be considered – and there’s a scalability threat in the information we have so far. The plan shows a full tablescan of the MMR_DTL table, and while tablescans are not necessarily a bad thing we’ve been told that: “this table MMR_DTL is an ever growing table“. It’s possible that Oracle can be very quick and efficient when doing the existence tests on the rows it selects from the table – but it is inevitable that the tablescan will take longer to complete as time passes; whether or not this is likely to matter is something we can’t decide from the information given: we don’t know how long the tablescan takes, we don’t know what fraction of the total time it accounts for, and we don’t know how much larger the table will grow each day.

    Another scalability detail we ought to ask about is the volume of data that we expect to update each time we run this statement. As time passes, do we expect to see the same number of rows waiting to be updated, or are we expecting the business (whatever that may be) to grow steadily each month, with an increase of a few percent in the number of rows to be updated on each execution? Our coding strategy may vary depending on the answer to that question – we might, for example, try to pre-empt a future problem by introducing some partitioning now.
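    Purely as a speculative sketch of that last idea – the choice of PMT_MSA_STRT_DT as the partitioning key and the monthly interval are assumptions for illustration only – an interval-partitioned copy of the table might look like:

    create table facets_custom.mmr_dtl_part
    partition by range (pmt_msa_strt_dt)
    interval (numtoyminterval(1, 'MONTH'))
    ( partition p_initial values less than (date '2015-01-01') )
    as
    select * from facets_custom.mmr_dtl;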

    The final scalability issue is one I’ve raised already and comes from the CRME_FUND_DTL_STG table. According to the plan there are about 8 rows in this table for each distinct value of MMR_DTL.MBRSHP_CK; if MMR_DTL is large and growing, is CRME_FUND_DTL_STG very large and growing even faster – as time passes, will there be more rows for each distinct value of MMR_DTL.MBRSHP_CK? Answers to these questions will help us decide whether we should use a hash join or a nested loop in the join to this table, and how to index the table to minimise random I/O.
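    If the nested loop survives, one option already hinted at in point 4 above is to extend the existing index so that the date filter is applied in the index rather than at the table – a sketch only, and the column order would need testing:

    -- add the date columns so the range filter is evaluated before visiting the table
    create index ie1_crme_fund_dtl_stg_v2 on facets_stage.crme_fund_dtl_stg
      (mbr_ck, ern_from_dt, ern_thru_dt);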

     

    To be continued …

     

    Guaranteed Method of Boosting your Oracle Skills

    I can tell you how to be a better Oracle DBA, Developer, Designer, Architect – whatever your flavour of role or aspect of profession, if Oracle tech is part of your working world I can hand you the key to improvement. And it is easy.

    I am totally assured(*) I can do this. It will work for every single one of you reading this post (except for you two, you know who you are). And you must send me $100 before I tell you how…

    Hell, no you don’t! This is not some bull-droppings selling piece, it is just advice. And some advice aimed directly at myself too.

    When did you last read the Oracle Server/Database Concepts manual? You know, that fairly short book (in fact, from 11G it is really short, with links through to other chapters in other books) that explains how Oracle TM (Copyright), actually does stuff? What it can do? It does not go into too many details but rather gives you a brief introduction of each concept and the fundamental “how it works” information.

    Thanks to Kevin Fries who pointed out it would be nice if I linked to the online concepts manuals (as he did in his comment):
    Here is the online 12C Database Concepts manual.
    Here is the online 11GR2 Database Concepts manual for those still with 11GR2.
    Here is the online 10GR2 Database Concepts manual for those trapped in the past with 10GR2.

    Read it. Read it this week. I am confident that if you read the latest Oracle Database Concepts manual you will be ahead of the game by a massive step.

    Oracle 7 instance diagram

    Why am I so sure? Because we forget what Oracle has been able to do for years and we miss new abilities if our day-job has not touched on them since they came in. I think there is a growing move to learning only what you really need to know to get the job done (as we are all so busy), as we know we can search the web for the rest. My friend Neil Chandler came up with a name for it: JIT learning (“Just In Time” learning). Only, you can’t easily search for what you don’t know (or have forgotten) is possible with the tech you use. If you go through the concepts manual you will be reminded of stuff you forgot, things you missed or {and this is key to newbies} gain an outline understanding of what Oracle can do.

    I became fascinated with how many people read the concepts manual about a decade ago and started asking a question whenever I presented on any Oracle topic: “Who has read the concepts manual for the version of Oracle you mostly work with?”. Over the last 10 or 12 years the number of hands has decreased from well over 50%. In 2012, at a UK meeting, it hit the bottom of the barrel: no hands whatsoever. Oddly enough, a few weeks later I was in Slovenia (for non-European people, a lovely country bordering Italy and Austria – google it if you need more help) and the same question resulted in 40% of the audience raising a hand. When I was in the US 6 months later, no hands at all again. In the UK and Europe since, no hands or occasionally one hand – and a few questions usually nailed down that it was a prior version of the manual they had read.

    I took some time to ask this question again at a UK user group meeting about 4 months ago (no hands came up, of course) and asked “why?”. The consensus was “we know most of what is in the concepts manual, we just need to know what has changed” – with an undercurrent of not having time to read the now-huge set of Oracle manuals. A few people suggested just reading the New Features guide. This was a crowd who did not know what a table cluster was (“Ha, look at ME! I know what a table cluster is! Hahahahaaaa – OK, no one uses them.”). (British ironic self-deprecation there.)

    Reading “New Features” is certainly better than nothing but I feel it is not enough as it does not remind us of the established stuff we have forgotten. I am on a bit of a personal Jihad to explain the basics of Oracle to any Newbie who cares to listen and I have to keep checking my facts with the concepts manual and some chosen expert friends (thank you VERY MUCH expert friends) and I keep stumbling over stuff I don’t know, misunderstood or forgot. And I have been an “expert” in Oracle since… Well, before the millennium bug or Android phones or iTunes had been invented. THAT LONG! I know my shit – and some of it is, well…. wrong.

    Actually, I have a confession. I have not read the 11g or 12C concepts manual. I told you this advice was aimed at me too.

    So, Go Read The Oracle 12C Concepts Manual. Go ON! Go and read it!!!! Oh. Still here? Well, I AM going to read it – as I last read the 10G concepts manual properly. And as part of my current push to present, talk and blog about the basics of Oracle, I will blog what jumps out at me. I know one thing – if I follow my own advice I will not be quiet any time between now and August; I will be posting blogs left, right and center about what I learn.

    I’ll use the tag 12Cbasics.

    Let the learning of basics begin.

    Oracle 7 or Oracle 8, 1.5″ of pure info

    Thanks to Joel Garry for digging his old manuals out of the basement and doing this shot for me :-)

    (*) it won’t work if you already read the latest concepts manual. But most people have not! Heck, I did not charge for the advice, so sue me if you read it already.

    Log Buffer #423: A Carnival of the Vanities for DBAs

    This Log Buffer edition covers Oracle, SQL Server and MySQL blog posts from all over the blogosphere!


    Oracle:

    Hey DBAs: you know you can install and run Oracle Database 12c on different platforms, but if you install it on an Oracle Solaris 11 zone, you can take advantage of some additional benefits.

    Here is a video with Oracle VP of Global Hardware Systems Harish Venkat talking with Aaron De Los Reyes, Deputy Director at Cognizant about his company’s explosive growth & how they managed business functions, applications, and supporting infrastructure for success.

    Oracle Unified Directory is an all-in-one directory solution with storage, proxy, synchronization and virtualization capabilities. While unifying the approach, it provides all the services required for high-performance enterprise and carrier-grade environments. Oracle Unified Directory ensures scalability to billions of entries. It is designed for ease of installation, elastic deployments, enterprise manageability, and effective monitoring.

    Understanding Flash: Summary – NAND Flash Is A Royal Pain In The …

    Extracting Oracle data & Generating JSON data file using ROracle.

    SQL Server:

    It is no good doing some or most of the aspects of SQL Server security right. You have to get them all right, because any effective penetration of your security is likely to spell disaster. If you fail in any of the ways that Robert Sheldon lists and describes, then you can’t assume that your data is secure, and things are likely to go horribly wrong.

    How does a column store index compare to a (traditional) row store index with regard to performance?

    Learn how to use the TOP clause in conjunction with the UPDATE, INSERT and DELETE statements.

    Did you know that scalar-valued, user-defined functions can be used in DEFAULT/CHECK CONSTRAINTs and computed columns?

    Tim Smith blogs as how to measure a behavioral streak with SQL Server, an important skill for determining ROI and extrapolating trends.

    Pilip Horan shows us how to run an SSIS Project as a SQL Job.

    MySQL:

    Encryption is an important component of secure environments. Despite being an intangible property, security doesn’t get enough attention when it comes to describing various systems. “Encryption support” is often the most detail you can get when asking how secure a system is. Other important details are often omitted, but the devil is in the details, as we know. In this post I will describe how we secure backup copies in TwinDB.

    The fsfreeze command is used to suspend and resume access to a file system, which allows consistent snapshots to be taken of the filesystem. fsfreeze supports Ext3/4, ReiserFS, JFS and XFS.

    Shinguz: Controlling worldwide manufacturing plants with MySQL.

    MySQL 5.7.7 was recently released (it is the latest MySQL 5.7, and is the first “RC” or “Release Candidate” release of 5.7), and is available for download.

    Upgrading Directly From MySQL 5.0 to 5.6 With mysqldump.

    One of the cool new features in 5.7 Release Candidate is Multi Source Replication.

     

    Learn more about Pythian’s expertise in Oracle , SQL Server and MySQL.