Once upon a time, in a blog post far, far away, I started talking about auditing in SQL Server. And I told you all about how to use SQL Audit to monitor what’s going on in your databases. Remember that? If you do, you might also recall that I mentioned there being more than one way to audit SQL Server. Well, it’s been a while, but I’m here to pick up that thread, and the next method I want to tell you about is what I consider to be one of the most under-appreciated features of SQL Server: Event Notifications.
Event Notifications were first introduced in SQL 2005, and, unlike some other features I could mention (I’m looking at you, SQL Audit), they’re available in all editions. So you don’t need to shell out beaucoup bucks to audit your instances. You just need to do a little TSQL coding (nothing too scary, I promise). I should mention here that Event Notifications are based on the SQL Trace architecture, and if you’ve been paying attention you’ll know that SQL Trace has been deprecated. So the future of Event Notifications is a bit cloudy at the moment. I really hope MS finds a way to keep it, because there’s no other feature that can take its place at this time. So if you’re listening, Microsoft bigwigs, here’s my plea: keep Event Notifications!!! Please?
Why I like Event Notifications
Why all the fuss? Well, Event Notifications are kind of like what you’d get if SQL Trace and DDL/Logon Triggers had a baby and that kid got the best parts of both parents. Like SQL Trace, Event Notifications work asynchronously, meaning outside the scope of the transaction that caused the event. This means that the event notification’s work doesn’t use the resources that transaction was using, and more importantly, it won’t impact that transaction if something goes horribly awry (think errors, blocking, etc.). Unfortunately, this asynchronicity has its price. Because it’s working outside the scope of the transaction, the event notification can’t be rolled back if the firing transaction rolls back. And along those same lines, an event notification can’t roll back the firing event, like a trigger could. (So if you’re looking for something that will prevent events from happening, Event Notifications aren’t the answer.)
Like triggers, however, Event Notifications can do more than just record an event; they can respond to it. We’ll go more into this next time when I talk about how they work, but let’s just say that, since Event Notifications work hand in hand with Service Broker, they can be used to perform actions. What kind of actions? They can insert event information into a table, obviously, so no more messing with multiple trace files (whoohoo!). But they can also do things like send an email. Want to know the moment one of your developers modifies a stored procedure? Event Notifications can do that.
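I’ll save the details for the next post, but to give you a taste, here’s a rough sketch of the moving parts. All the object names here are hypothetical, and you’d want proper activation and error handling in real life; this just shows the shape of it:

```sql
-- 1. A Service Broker queue and service to receive the events
--    (the contract name below is the built-in one for event notifications).
CREATE QUEUE dbo.AuditQueue;
GO
CREATE SERVICE AuditService
    ON QUEUE dbo.AuditQueue
    ([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);
GO
-- 2. A server-scoped event notification that fires whenever
--    a stored procedure is altered anywhere on the instance.
CREATE EVENT NOTIFICATION CaptureAlterProcedure
    ON SERVER
    FOR ALTER_PROCEDURE
    TO SERVICE 'AuditService', 'current database';
GO
-- 3. Pull the event details (as XML) off the queue --
--    from here you could insert into a table, send mail, etc.
RECEIVE CAST(message_body AS XML) AS EventData
FROM dbo.AuditQueue;
```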
Because they use the SQL Trace architecture, Event Notifications are very low impact. They do incur some overhead due to their use of XML, but this is minimal and shouldn’t be noticeable.
What can Event Notifications audit?
It might be easier to ask what they can’t audit. Really, which events you can audit will vary based on the scope of the event notification. You can define it at the SERVER or DATABASE level, and obviously certain events only make sense at a certain scope, but other events are available at both scopes.
You can query the sys.event_notification_event_types catalog view to see a full list of all events and event groups, but in a nutshell, you can use Event Notifications to audit:
- all DDL events – Things like CREATE TABLE, ALTER PROCEDURE, etc. are obvious candidates, but you can also audit CREATE STATISTICS to monitor SQL Server’s creation of auto stats, or linked server modifications using the ALTER_LINKED_SERVER event.
- some trace events – How about monitoring when a query is missing a join predicate or missing column stats? What about auditing data or log file auto growth? That might be information worth knowing about.
- security events – Monitor failed logins, or all logins, as needed.
- DML events – Not too many people know this, but you can also use Event Notifications to monitor object access with the AUDIT_SCHEMA_OBJECT_ACCESS_EVENT. Like SQL Audit, this event is monitored at the time of the permission check, so you can audit not only successful attempts at access, but unsuccessful attempts, too. It’s worth noting that this event is only available at the SERVER scope, which means it will fire for every object access event in the instance.
What can’t you audit with Event Notifications? Temporary objects. They won’t fire for local or global temporary tables or temporary stored procedures. So no monitoring of TempDB usage here.
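If you want to browse everything you can subscribe to at each scope, a quick query against the catalog view mentioned above will list it all (parent_type lets you see how events roll up into event groups):

```sql
-- Full list of auditable event types and event groups.
SELECT type, type_name, parent_type
FROM sys.event_notification_event_types
ORDER BY type_name;
```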
So what’s next?
That’s a basic overview of Event Notifications. In the next post, I’ll go into how they work and walk through creating a basic event notification. Stay tuned…
Last week I was fortunate enough to attend the SQLskills IE1 class in Tampa. Fortunate to have an employer willing to send me to that kind of training, but also because Tampa in February is way better than Cleveland in February. Not that I really got to enjoy the sunshine and warmth much, because let me tell you, they call it an “immersion event” for a reason. 8+ hours of intense SQL Server training, usually complemented with a couple more hours of SQL-related activities in the evening. It makes for 5 very long days, but it was so worth it.
A week of SQL Server
We started out the week with Paul (b | t) giving us a solid foundation of database structures and datafile internals. From the structure of a record, to how it’s placed on a page, allocation bitmaps, and compression, with demos using DBCC IND and DBCC PAGE. It’s very dense stuff, and to be honest, for me this isn’t the most exciting topic, but it’s important for truly understanding how SQL works. After that we moved on to datafile internals, talking about physical layout, storage considerations, file maintenance and tempdb. After that Kimberly (b | t) closed out the day talking about locking and blocking, how data modifications work under the covers, transactions and savepoints.
That was just day 1. On day 2 Kimberly continued with more on locking, then went into a discussion on isolation, focusing on snapshot isolation and how that works internally. Paul then covered logging and recovery, VLFs, how transactions are logged and rolled back, and the internals of a checkpoint. Another intense day.
Day 3 focused primarily on indexing and data access. Kimberly started out by explaining table and index structures, making sure we understood the importance of a good clustering key and its impact on performance and maintenance. From there she segued into data access internals: the tipping point for index usage, the benefits of covering indexes and filtered indexes. Paul closed out the day discussing the ins and outs of index fragmentation.
Thursday’s topic du jour? Statistics. And let me tell you: Kimberly loves to talk about statistics. She promised us at the beginning of the week that even if we expected the stats module to be the driest of the class, we would change our minds by the time she finished. And she was right. Stats are pretty darn interesting, and having a good understanding of how they’re gathered and used, both by SQL and by you, is critical to good performance. That’s one module I’ll be reviewing soon.
We closed out the week learning about indexing strategies, table design, and partitioning. My only “complaint” about the whole week was that I wish we’d had more time on partitioning. Though, Kimberly did acknowledge that there wasn’t enough time in this class to do partitioning justice and they’re talking about an IE5 that covers it more in depth. What we did cover, however, gave me ideas on how both partitioned tables and partitioned views could be used on large tables.
It’s a lot of information being thrown at you in 5 days, and even though Paul and Kimberly do a great job of presenting it in a very easy to understand manner, you need to keep in mind that you’re not going to absorb it all in one week. While I would have loved to stay for IE2, which is more directly applicable to my current job, I’m actually glad to have a chance for everything from this week to firm up in my head. That way when I do take IE2, I go in with a solid foundation.
Some tips if you’re planning on attending:
- Throughout the week I kept a separate list of resources I wanted to check out further once the class was over. I didn’t want them buried amongst the other module notes.
- Get plenty of sleep. This isn’t a conference! You’ll want to be rested to make the most of the class.
- You won’t “get” everything they cover during the day, so you’ll want to review the materials before the next day. That way you can ask questions if something’s still not clear. Personally, I found it more effective to get up a little earlier and review in the morning when I was fresh.
- Ask questions. There are no stupid questions. If something doesn’t make sense to you, ask!
- Disconnect as much as you can. I realize that you’ll probably need to keep in touch with your job, but try to limit it to breaks and off hours.
Is it worth it?
Do you know what thought kept popping into my head throughout the week? “I wish I’d known that at my last job.” There were so many scenarios and problems that Paul and Kimberly talked about that I’d seen on a regular basis. And had I had this training then, I could have addressed them so much better. That’s OK, though; from here forward I’ll be able to work with SQL Server more effectively.
Is it expensive? Compared to other classes you could probably take locally, sure. Especially when you add in travel costs, since it’s not likely for the majority of us that these events will happen in our home town. But you’ll never get this level of training from one of those local classes. You just won’t.
We talk about training a lot, about whose responsibility it is: ours or our employer’s. I’m not going to debate that now, but I will say this: you don’t ask, you don’t get. Ask your manager. Make your argument as best you can. And if he/she still says no, find a way to send yourself. You won’t regret a single penny. And to you managers out there: absolutely send your DBAs, but send your developers, too. This isn’t a class just for admins. Developers will also benefit from a solid understanding of how SQL Server works.
Last week I attended the Northeast Ohio Oracle Users Group’s (NEOOUG) first meeting of 2013. The main topic of the day was the upcoming release of Oracle 12c. The “c” stands for “cloud”, and the focus is on making private cloud environments easier to deploy and manage.
Up until now in Oracle, every database required its own memory structures and background processes. You could have multiple schemas using the same database/instance, but if you wanted to have separate databases, you needed to spin up a new instance. This meant managing memory, resources, patching, etc. between two separate environments. Spin up a couple more, and you can see where the management nightmare begins. Well, with version 12c, Oracle is introducing the concept of pluggable databases (PDBs). The idea is that you install a single container database (CDB), which manages the memory and background processes, and then you “plug in” user databases (the PDBs) into the CDB. All of the PDBs share the same pool of memory and processes. So instead of multiple instances, each with a single database, you now have a single instance housing multiple databases. Hmm, where have I heard that before…
Anyway, in addition to making resource management easier, it also makes patching easier, since you patch the container database and the patch is applied to all of the plugged-in databases. The obvious question then was: what if you have an app that can’t be patched, like your ERP system? In that case, you can spin up a new container database, unplug the ERP database from the first container, and plug it into the new container.
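Keep in mind 12c hasn’t shipped yet, so the final syntax could change, but based on what’s been published so far, the unplug/plug dance looks roughly like this (database names and file paths are made up for illustration):

```sql
-- On the original container: close the ERP PDB and unplug it,
-- writing its metadata out to an XML manifest file.
ALTER PLUGGABLE DATABASE erp_pdb CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE erp_pdb
    UNPLUG INTO '/u01/app/oracle/manifests/erp_pdb.xml';

-- On the new container: plug it in using that manifest
-- (NOCOPY reuses the existing datafiles in place), then open it.
CREATE PLUGGABLE DATABASE erp_pdb
    USING '/u01/app/oracle/manifests/erp_pdb.xml'
    NOCOPY;
ALTER PLUGGABLE DATABASE erp_pdb OPEN;
```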
There were still some unanswered questions surrounding the whole multitenancy concept. Could you plug a PDB into a lower version CDB? Does each PDB have its own redo logs, control files, UNDO and TEMP spaces, etc.? Details like that aren’t clear yet, or at least they weren’t clear at the meeting. What is known is that you’ll get one CDB and one PDB out of the box. Additional PDBs are a licensed feature ($).
But wait, there’s more!
While the pluggable database change was the big news of the afternoon, there are other new features in 12c that are also worth noting.
- Data Guard Far Sync – introduces the ability to asynchronously replicate your database to another, geographically separate datacenter through the use of a Far Sync database. The Far Sync database is a stripped-down database composed of only control files, redo logs, and enough data space to house the redo data being sent to the remote standby database. The idea is that the primary database synchronously sends redo data to the Far Sync database. The Far Sync then compresses the redo data and sends it asynchronously to the standby database.
- Information Lifecycle Management – tracks extent- or block-level statistics on read and update activity. This information can then be used to create a heat map of how data is being utilized (or not utilized) to better plan how to treat that data when it comes to storage. Business rules can also be put in place to automate partition compression or movement based on usage. It’s also smart enough to exclude DDL, statistics maintenance, and reads that result from full table scans.
- Data Redaction – dynamic data masking for queries based on rules created in the database. I thought this one was pretty neat, since previously you had to code the redaction of sensitive data at the application level. But that obviously left you open to other query tools. Now the database handles the redaction, based on business rules you create.
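To give you an idea of what a redaction rule might look like, here’s a sketch using the DBMS_REDACT package from the 12c preview material. The schema, table, and column are hypothetical, and since the release isn’t final, treat the exact parameters as subject to change:

```sql
-- Mask all but the last 4 digits of a credit card column
-- for anyone who queries it; the column value on disk is untouched.
BEGIN
  DBMS_REDACT.ADD_POLICY(
    object_schema       => 'APP',
    object_name         => 'CUSTOMERS',
    policy_name         => 'redact_card_number',
    column_name         => 'CARD_NUMBER',
    function_type       => DBMS_REDACT.PARTIAL,
    -- input format, output format, mask char, start, end positions
    function_parameters => 'VVVVFVVVVFVVVVFVVVV,VVVV-VVVV-VVVV-VVVV,*,1,12',
    expression          => '1=1'  -- applies to all users; you'd tighten this
  );
END;
/
```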
And that’s all, folks. If you’re interested in becoming more involved in the Oracle community here in the Cleveland area, check out the NEOOUG’s web site and consider becoming a member.
Last month I mentioned that I’d like to start including Oracle stuff on this blog. I’m getting back into that world again as part of the new job, and this blog has always been, first and foremost, a way for me to document and share what I’m learning. Now I realize that a lot of you might not be too interested in the Oracle side of things, so I’m offering some new subscription options. In addition to the main Cleveland DBA RSS feed, I now have
Of course, you can still stick with the main feed you’re using now and get it all.
So, you may have noticed that things have been a little quiet around here of late. Sorry about that, I’ve kind of been dropping the ball when it comes to blogging.
So where have I been? Well, back in October I left my job of 12 years to move on to a new challenge. Why did I finally decide to leave after 12 years? Well, when I started at that firm, I had no DBA experience. I had done some tech support for Informix, mostly for their I4GL and New Era programming languages, and I’d done some development in I4GL, but I wanted to move into database administration. The firm was looking for someone with DBA experience with Informix, as well as knowledge of the I4GL language. They took a chance on me and it worked out well for both of us. Informix was eventually phased out in favor of Oracle, and then SQL Server. Each new phase gave me an opportunity to expand my skill set. It kept me challenged.
The thing about working at one place for so long, especially a large shop, is that you really only get exposed to one way of doing things. And while I appreciate everything I learned during my time there, I knew the only way to push my skillset to the next level was to move on. So when this opportunity presented itself, I jumped on it.
I won’t go into details here regarding my new employer, but suffice it to say they’ve been keeping me busy. I’ve already learned a lot of new stuff, and I’ve been brushing up on my somewhat rusty Oracle skills. Since I use this blog in part as a way to record what I’ve learned for later reference, as well as to solidify it in my head, I’ve been thinking of how I can incorporate the Oracle stuff. I know a lot of us out there support both Oracle and SQL Server, and there are quite a few reluctant “accidental Oracle DBAs” that may find the information useful, too. And there just doesn’t seem to be the same number of good Oracle blogs out there as there are for SQL Server. Maybe a separate blog is the way to go, or perhaps a separate page. I’d prefer to keep it all on the same domain. Perhaps I’m overthinking it; maybe it should all be mixed together, and if you aren’t interested in the Oracle stuff, don’t read it. I’d just need to update my syndication feeds.
And then there’s life. My youngest brother got married and we had a fun trip up to Buffalo to celebrate with the family. Two of my best friends moved across the country, and there were numerous social activities in order to squeeze every last drop of fun out of them before they left. Last week we spent an evening at Severance Hall, where they showed Charlie Chaplin’s Modern Times while the Cleveland Orchestra played the original score. As much as I love old movies, I’d never seen any Charlie Chaplin films before. It was so much fun. We splurged on box seats, so now I’ll be spoiled for anything else.
The holiday season buys me some slack, I’m sure. After all, I did bake 22 dozen cookies this weekend (8 dozen rugelach, 8 dozen pecan tassies, and 6 dozen sugar cookies, if you’re wondering). Everyone is busy this time of year and I’m seeing a lot of blogs “go dark” until the new year. So I’ll probably be quiet a bit longer, but hopefully you’ll still be here when I return in January.