Are you at the PASS Summit this week? I wasn’t able to make it myself. But I can find some consolation in the fact that I just returned from a fantastic trip to Peru. Not a bad trade, eh?
Machu Picchu has long been a bucket list item, and my better half and I decided that this was the year. Since this was likely to be a once-in-a-lifetime trip, we decided to go for broke and do the 4-day classic trek on the Inca Trail to get there.
Our trip started with a couple of days in Cusco. The Inca Trail hike can be rigorous, to say the least, and it’s recommended to spend a few days at altitude to get acclimated before starting out. So the night of our arrival we arranged a visit to the Cusco Planetarium overlooking the city. We were given a presentation on the southern skies and their role in the Incan civilization, then looked at some amazing sights through their telescopes. The Cusco Planetarium is a small, family-run operation, but they are obviously passionate about what they do, and it was a great way to spend our first evening.
The next day, after some exploring around the city, we took a cooking class through Marcelo Batata. For me, this was the highlight of our time in Cusco. For several hours we learned about the native fruits, veggies, and grains of Peru and how Peruvian cuisine has been influenced by outside cultures. We prepared an appetizer course, Pisco cocktails, and a main course, all while being treated to small bites prepared by the chefs. Everything was absolutely delicious, and this is another activity I would recommend to anyone visiting Cusco, especially anyone who appreciates good food.
The Inca Trail
The day our trek began, we were up at 3:30am to meet our 4am bus that would take us to km 82, the official starting point of the classic Inca Trail hike. And there began a 4-day hike that I can honestly say was the most physically challenging experience of my life. I knew it would be tough. I’d read about other hikers’ experiences, watched videos on YouTube, and in my head I knew the distances and elevations. But reading about it didn’t fully prepare me for just how hard it was. On day 1 I was seriously wishing I’d spent more time on the treadmill in the weeks leading up to this trip. Really, it was just your average hike at that point, but I had underestimated the effect the altitude would have on my breathing and stamina. Add a day-pack with a couple liters of water and I was a bit winded at points. The day-2 hike to Dead Woman’s Pass, however, had me seriously questioning whether I had it in me. There were stretches where I would walk 20 steps or so and have to stop to catch my breath. It makes for a long morning. When you finally reach the top of the pass, however, all the fatigue suddenly disappears, replaced by an incredible sense of accomplishment, and you’re ready to go again. Good thing, too, because we still had several hours of hiking ahead of us that day. Day 3 was a piece of cake by comparison; we made it to our final campsite early enough to enjoy a siesta before a side trip to some ruins.
Our day at Machu Picchu started at 3am. We dressed and ate a quick breakfast before lining up at the gate to wait until 5:30, when we would be allowed to start our hike to the Sun Gate. There are mixed opinions about whether it’s worth getting up so early to get into that line. On the one hand, everyone wants to be the first group to the Sun Gate and to Machu Picchu itself. Beat the crowds and there are fewer people in your pictures, right? On the other hand, even if you’re the first group waiting at that gate, it doesn’t open until 5:30, and when it does open, everyone enters the trail at the same time. So I’m not sure just how much of an advantage it gives you. Personally, I wasn’t keen on the idea of getting up so early just to wait, but when I caught my first glimpse of Machu Picchu, I didn’t mind so much. I really wish I had the words to describe it. Awesome comes to mind. It’s hard to believe something like that could be “lost” for hundreds of years. What makes it more amazing is that they’re still uncovering it.
Solo hikers are not allowed onto the trail; you have to sign up with an official group. After some shopping around, we ended up choosing Llama Path, based on online recommendations and reviews. They handle the Inca Trail entries and Machu Picchu tickets, as well as providing camping equipment, guides and porters, and food and water. There were some minor hiccups, but all in all Llama Path is a fine company that took good care of us. I think I should say something in particular about the porters, because I have no idea how they do what they do. Each morning after we tourists left camp, they broke everything down, packed it up, and carried it on their backs to the lunch site, where they unpacked, cooked us lunch, then packed everything up again and carried it to that night’s campsite. They would leave a site after us and arrive at the next site before us, all while carrying huge packs of gear.
Going on trips like this as part of a group is a bit of a gamble, isn’t it? A good group of fellow travelers can make an already memorable experience that much more special. Get a bad group and, well, not so much. We were extremely lucky with our group. Every meal came with non-stop laughing, and every leg of the trek was so much easier because of conversations with interesting people with great stories to tell. Whether it was the couple from the U.K. who were halfway through a 6-month trip across South America, the Australian boys who were heading to Key West next, or the couple from Canada who proved once and for all that Canadians are the nicest people ever, they all made me realize: I want to travel more. A big trip every couple of years isn’t enough. I want more! The only question is: where to next?
So I know it’s been a while since I last posted on this blog, but I promise I haven’t been totally slacking off. In fact, I’m proud to announce that I am now an officially published author (I do like the sound of that. Author). That’s right, 14 other first-time authors and I, with tons of help from Red Gate and the MidnightDBAs, have come together to produce Tribal SQL, which will be officially launched at the PASS Summit next week. Woot!!!
How did it happen? Well, back in late 2011, Jen McCown posted a call for any previously unpublished SQL Server professionals interested in collaborating on a new book. The premise of the book was to share knowledge that all DBAs should know. So I submitted an abstract on the topic of auditing (shocker, I know), and my submission was accepted. Thus began a 2-year writing and editing process, and quite a learning experience. There were some quiet periods where I, to be honest, wondered whether it would really happen. Then, in a flurry of activity over the last 6 months or so, it all came together into a final product that I cannot wait to see.
15 first-time authors answer the question: What makes you passionate about working with SQL Server?
MidnightDBA and Red Gate partnered to produce a book filled with community, or “Tribal,” knowledge on SQL Server. The resulting book is a series of chapters on lessons learned, perhaps the hard way, which you won’t find in traditional training or technical guidance material.
As a truly community-driven book, the authors are all generously donating 100% of their royalties to the charity Computers 4 Africa.
A DBA’s core responsibilities are constant. A DBA must have the hard skills necessary to maintain and enforce security mechanisms on the data, prepare effectively for disaster recovery, and ensure the performance and availability of all the databases in their care.
Side by side with these hard skills, our authors have also recognized the importance of communication skills to the business and to their careers. We have chapters on communicating clearly with co-workers and business leaders, presenting data as useful information that the business can use to make decisions, and sound project management skills.
The resulting book, Tribal SQL, is a reflection of a DBA’s core, long-standing responsibilities and of what it means to be a DBA in today’s businesses.
Computers 4 Africa
As mentioned above, all proceeds from the book sales are being donated to Computers 4 Africa, a non-profit that collects and refurbishes old computers for use in African schools, colleges, and communities. So not only will this book augment your SQL Server knowledge base, it will also help train the next generation of IT professionals. Not too shabby, eh? This book is truly 466 pages of awesome.
Not going to PASS? No problem! The book is also available on Amazon. And, you know, the holidays are fast approaching. I know what all my family and friends are getting this year…
Once upon a time, in a blog post far, far away, I started talking about auditing in SQL Server. And I told you all about how to use SQL Audit to monitor what’s going on in your databases. Remember that? If you do, you might also recall that I mentioned there being more than one way to audit SQL Server. Well, it’s been a while, but I’m here to pick up that thread, and the next method I want to tell you about is what I consider to be one of the most under-appreciated features of SQL Server: Event Notifications.
Event Notifications were first introduced in SQL 2005, and, unlike some other features I could mention (I’m looking at you, SQL Audit), they’re available in all editions. So you don’t need to shell out beaucoup bucks to audit your instances. You just need to do a little T-SQL coding (nothing too scary, I promise). I should mention here that Event Notifications are based on the SQL Trace architecture, and if you’ve been paying attention you’ll know that SQL Trace has been deprecated. So the future of Event Notifications is a bit cloudy at the moment. I really hope MS finds a way to keep it, because there’s no other feature that can take its place at this time. So if you’re listening, Microsoft bigwigs, here’s my plea: keep Event Notifications!!! Please?
Why I like Event Notifications
Why all the fuss? Well, Event Notifications are kind of like what you’d get if SQL Trace and DDL/Logon Triggers had a baby and that kid got the best parts of both parents. Like SQL Trace, Event Notifications work asynchronously, meaning outside the scope of the transaction that caused the event. This means that the event notification’s work doesn’t use the resources that transaction was using, and more importantly, it won’t impact that transaction if something goes horribly awry (think errors, blocking, etc.). Unfortunately, this asynchronous-ness has its price. Because it works outside the scope of the transaction, the event notification can’t be rolled back if the firing transaction rolls back. And along those same lines, an event notification can’t roll back the firing event, like a trigger could. (So if you’re looking for something that will prevent events from happening, Event Notifications aren’t the answer.)
Like triggers, however, Event Notifications can do more than just record an event; they can respond to it. We’ll go more into this next time when I talk about how they work, but let’s just say that, since Event Notifications work hand in hand with Service Broker, they can be used to perform actions. What kind of actions? They can insert event information into a table, obviously, so no more messing with multiple trace files (whoohoo!). But they can also do things like send an email. Want to know the moment one of your developers modifies a stored procedure? Event Notifications can do that.
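I’ll save the full walkthrough for next time, but just to make that concrete, here’s a rough sketch of the plumbing involved (the object names are mine, made up for illustration, and this assumes Service Broker is enabled in the database): a queue and service to receive the event messages, an event notification that fires when a stored procedure is altered, and a quick peek at the XML that lands on the queue.

```sql
-- A minimal sketch (illustrative names): a queue and service to receive
-- event notification messages, using the built-in notification contract.
CREATE QUEUE dbo.AuditQueue;

CREATE SERVICE AuditService
    ON QUEUE dbo.AuditQueue
    ([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);

-- Fire a notification whenever a stored procedure in this database is altered.
CREATE EVENT NOTIFICATION CaptureProcChanges
    ON DATABASE
    FOR ALTER_PROCEDURE
    TO SERVICE 'AuditService', 'current database';

-- The event arrives on the queue as XML; pull one message off and inspect it.
DECLARE @msg XML;

RECEIVE TOP (1) @msg = CAST(message_body AS XML)
FROM dbo.AuditQueue;

SELECT @msg.value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(128)') AS ObjectName,
       @msg.value('(/EVENT_INSTANCE/LoginName)[1]', 'nvarchar(128)')  AS LoginName,
       @msg.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)') AS CommandText;
```

In a real setup you’d have an activation procedure on the queue doing the RECEIVE, which is where the “respond to it” part (logging to a table, sending that email) comes in.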
Because they use the SQL Trace architecture, Event Notifications are very low impact. They do incur some overhead due to their use of XML, but this is minimal and shouldn’t be noticeable.
What can Event Notifications audit?
It might be easier to ask what they can’t audit. Really, which events you can audit will vary based on the scope of the event notification. You can define one at the SERVER or DATABASE level, and obviously certain events only make sense at a certain scope, but other events are available at both scopes.
You can query the sys.event_notification_event_types catalog view to see a full list of all events and event groups (there’s a sample query after the list below), but in a nutshell, you can use Event Notifications to audit:
- all DDL events – Things like CREATE TABLE, ALTER PROCEDURE, etc. are obvious candidates, but you can also audit CREATE STATISTICS to monitor SQL Server’s creation of auto stats, or what about linked server modifications using the ALTER_LINKED_SERVER event?
- some trace events – How about monitoring when a query is missing a join predicate or missing column stats? What about auditing data or log file auto growth? That might be information worth knowing about.
- security events – Monitor failed logins, or all logins, as needed.
- DML events – Not too many people know this, but you can also use Event Notifications to monitor object access with the AUDIT_SCHEMA_OBJECT_ACCESS_EVENT. Like SQL Audit, this event is monitored at the time of the permission check, so you can audit not only successful attempts at access, but unsuccessful attempts, too. It’s worth noting that this event is only available at the SERVER scope. Which means it will fire for every object access event in the instance.
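Here’s that sample query I mentioned. Nothing fancy, but it’s handy for browsing what’s available; the parent_type column shows how individual events roll up into event groups.

```sql
-- List every event and event group available to event notifications.
SELECT type, type_name, parent_type
FROM sys.event_notification_event_types
ORDER BY type_name;
```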
What can’t you audit with Event Notifications? Temporary objects. They won’t fire for local or global temporary tables or temporary stored procedures. So no monitoring of TempDB usage here.
So what’s next?
That’s a basic overview of Event Notifications. In the next post, I’ll go into how they work and walk through creating a basic event notification. Stay tuned…
Last week I was fortunate enough to attend the SQLSkills IE1 class in Tampa. Fortunate to have an employer willing to send me to that kind of training, but also because Tampa in February is way better than Cleveland in February. Not that I really got to enjoy the sunshine and warmth much, because let me tell you, they call it an “immersion event” for a reason. 8+ hours of intense SQL Server training a day, usually complemented with a couple more hours of SQL-related activities in the evening. It makes for 5 very long days, but it was so worth it.
A week of SQL Server
We started out the week with Paul (b | t) giving us a solid foundation in database structures: from the structure of a record to how it’s placed on a page, allocation bitmaps, and compression, with demos using DBCC IND and DBCC PAGE. It’s very dense stuff, and to be honest, for me this isn’t the most exciting topic, but it’s important for truly understanding how SQL Server works. After that we moved on to datafile internals, talking about physical layout, storage considerations, file maintenance, and tempdb. Kimberly (b | t) then closed out the day talking about locking and blocking, how data modifications work under the covers, and transactions and savepoints.
That was just day 1. On day 2 Kimberly continued with more on locking, then went into a discussion on isolation, focusing on snapshot isolation and how that works internally. Paul then covered logging and recovery, VLFs, how transactions are logged and rolled back, and the internals of a checkpoint. Another intense day.
Day 3 focused primarily on indexing and data access. Kimberly started out by explaining table and index structures, making sure we understood the importance of a good clustering key and its impact on performance and maintenance. From there she segued into data access internals: the tipping point for index usage, the benefits of covering indexes and filtered indexes. Paul closed out the day discussing the ins and outs of index fragmentation.
Thursday’s topic du jour? Statistics. And let me tell you: Kimberly loves to talk about statistics. She promised us at the beginning of the week that even if we expected the stats module to be the driest of the class, we would change our minds by the time she finished. And she was right. Stats are pretty darn interesting, and having a good understanding of how they’re gathered and used, both by SQL and by you, is critical to good performance. That’s one module I’ll be reviewing soon.
We closed out the week learning about indexing strategies, table design, and partitioning. My only “complaint” about the whole week is that I wish we’d had more time on partitioning. Kimberly did acknowledge, though, that there wasn’t enough time in this class to do partitioning justice, and that they’re talking about an IE5 that covers it more in depth. What we did cover, however, gave me ideas on how both partitioned tables and partitioned views could be used on large tables.
It’s a lot of information to have thrown at you in 5 days, and even though Paul and Kimberly do a great job of presenting it in a very easy-to-understand manner, you need to keep in mind that you’re not going to absorb it all in one week. While I would have loved to stay for IE2, which is more directly applicable to my current job, I’m actually glad to have a chance for everything from this week to firm up in my head. That way, when I do take IE2, I’ll go in with a solid foundation.
Some tips if you’re planning on attending:
- Throughout the week I kept a separate list of resources I wanted to check out further once the class was over. I didn’t want them buried amongst the other module notes.
- Get plenty of sleep. This isn’t a conference! You’ll want to be rested to make the most of the class.
- You won’t “get” everything they cover during the day, so you’ll want to review the materials before the next day. That way you can ask questions if something’s still not clear. Personally, I found it more effective to get up a little earlier and review in the morning when I was fresh.
- Ask questions. There are no stupid questions. If something doesn’t make sense to you, ask!
- Disconnect as much as you can. I realize that you’ll probably need to keep in touch with your job, but try to limit it to breaks and off hours.
Is it worth it?
Do you know what thought kept popping into my head throughout the week? “I wish I’d known that at my last job.” There were so many scenarios and problems that Paul and Kimberly talked about that I’d seen on a regular basis, and had I had this training then, I could have addressed them so much better. That’s OK, though; from here forward I’ll be able to work with SQL Server more effectively.
Is it expensive? Compared to other classes you could probably take locally, sure. Especially when you add in travel costs, since for most of us these events aren’t likely to happen in our hometown. But you’ll never get this level of training from one of those local classes. You just won’t.
We talk about training a lot, about whose responsibility it is: ours or our employer’s. I’m not going to debate that now, but I will say this: you don’t ask, you don’t get. Ask your manager. Make your argument as best you can. And if he/she still says no, find a way to send yourself. You won’t regret a single penny. And to you managers out there: absolutely send your DBAs, but send your developers, too. This isn’t a class just for admins. Developers will also benefit from a solid understanding of how SQL Server works.
Last week I attended the Northeast Ohio Oracle Users Group’s (NEOOUG) first meeting of 2013. The main topic of the day was the upcoming release of Oracle 12c. The “c” stands for “cloud”, and the focus is on making private cloud environments easier to deploy and manage.
Up until now in Oracle, every database has required its own memory structures and background processes. You could have multiple schemas using the same database/instance, but if you wanted separate databases, you needed to spin up a new instance. This meant managing memory, resources, patching, etc. between two separate environments. Spin up a couple more, and you can see where the management nightmare begins. Well, with version 12c, Oracle is introducing the concept of pluggable databases (PDBs). The idea is that you install a single container database (CDB), which manages the memory and background processes, and then you “plug” user databases (the PDBs) into the CDB. All of the PDBs share the same pool of memory and processes. So instead of multiple instances, each with a single database, you now have a single instance housing multiple databases. Hmm, where have I heard that before…
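To give you an idea of what that looks like, here’s a rough sketch based on the 12c syntax that was presented (the PDB name, admin user, and file paths are made up for illustration): create a new PDB from the seed inside the container, then open it for use.

```sql
-- A rough sketch (illustrative names and paths): create a new pluggable
-- database from the seed inside the container, then open it.
CREATE PLUGGABLE DATABASE sales_pdb
  ADMIN USER sales_admin IDENTIFIED BY SomePassword1
  FILE_NAME_CONVERT = ('/u01/oradata/cdb1/pdbseed/',
                       '/u01/oradata/cdb1/sales_pdb/');

ALTER PLUGGABLE DATABASE sales_pdb OPEN;
```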
Anyway, in addition to making resource management easier, this also makes patching easier, since you patch the container database and the patch applies to all of the plugged-in databases. The obvious question, then, was: what if you have an app that can’t be patched, like your ERP system? In that case, you can spin up a new container database, unplug the ERP database from the first container, and plug it into the new one.
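The unplug/re-plug dance for that ERP database would look something like this (again, the names and paths are made up, and details could change before release):

```sql
-- In the original (to-be-patched) CDB: close the PDB and unplug it
-- to an XML manifest, keeping the datafiles in place.
ALTER PLUGGABLE DATABASE erp_pdb CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE erp_pdb UNPLUG INTO '/u01/manifests/erp_pdb.xml';
DROP PLUGGABLE DATABASE erp_pdb KEEP DATAFILES;

-- In the new CDB: plug it back in using the manifest and the same datafiles.
CREATE PLUGGABLE DATABASE erp_pdb USING '/u01/manifests/erp_pdb.xml' NOCOPY;
ALTER PLUGGABLE DATABASE erp_pdb OPEN;
```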
There were still some unanswered questions surrounding the whole multitenancy concept. Could you plug a PDB into a lower-version CDB? Does each PDB have its own redo logs, control files, UNDO and TEMP spaces, etc.? Details like that aren’t clear yet, or at least they weren’t clear at the meeting. What is known is that you’ll get one CDB and one PDB out of the box; additional PDBs are a licensed feature ($).
But wait, there’s more!
While the pluggable database change was the big news of the afternoon, there are other new features in 12c that are also worth noting.
- Data Guard Far Sync – introduces the ability to asynchronously replicate your database to another, geographically separate datacenter through the use of a Far Sync database. The Far Sync database is a stripped-down database composed of only control files, redo logs, and enough data space to house the redo data being sent to the remote standby database. The idea is that the primary database synchronously sends redo data to the Far Sync database, which compresses the redo data and sends it asynchronously to the standby database.
- Information Lifecycle Management – tracks extent- or block-level statistics on read and update activity. This information can then be used to create a heat map of how data is being utilized (or not utilized) to better plan how to treat that data when it comes to storage. Business rules can also be put in place to automate partition compression or movement based on usage. It’s also smart enough to exclude DDL, statistics maintenance, and reads that result from full table scans.
- Data Redaction – dynamic data masking for queries, based on rules created in the database. I thought this one was pretty neat, since previously you had to code the redaction of sensitive data at the application level, which obviously left you open to other query tools. Now the database handles the redaction, based on business rules you create (there’s a quick sketch just below).
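To give you a flavor of how those Data Redaction rules are defined, here’s a minimal sketch using the DBMS_REDACT package (the schema, table, column, and policy names are all made up for illustration):

```sql
-- A minimal sketch (illustrative names): fully redact the SSN column for
-- every query against HR.EMPLOYEES, no matter which tool issues the query.
BEGIN
  DBMS_REDACT.ADD_POLICY(
    object_schema => 'HR',
    object_name   => 'EMPLOYEES',
    policy_name   => 'redact_ssn',
    column_name   => 'SSN',
    function_type => DBMS_REDACT.FULL,
    expression    => '1=1');  -- the rule: always applies
END;
/
```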
And that’s all, folks. If you’re interested in becoming more involved in the Oracle community here in the Cleveland area, check out the NEOOUG’s web site and consider becoming a member.