Posts

Working with PowerShell & Multiple Azure Contexts

When working with multiple Azure subscriptions, the PowerShell Az.* modules allow for easy context switching. This means that you can run commands against multiple subscriptions, or you can run commands against subscriptions without changing your default context. An Azure Context object contains information about the Account that was used to sign into Azure, the active (for that context) Azure Subscription, the Tenant, and an auth token cache. The TokenCache property below looks empty, but it isn't; it just can't be read from here for security reasons, though you can retrieve a token with the Get-AzAccessToken command. Here's what is in an Azure Context object:

PS> Get-AzContext | fl *

Name               : TK-PRD (yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy) - tim@timkennedy.net
Account            : tim@timkennedy.net
Environment        : AzureCloud
Subscription       : yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy
Tenant             : zzzzzzzz-zzzz-zzzz-zzzz-zzzzzzzzzzzz
TokenCache         :
VersionProfile     :
ExtendedProperties : {}

If y...
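Context switching itself is a one-liner. A minimal sketch, assuming the Az.Accounts module is loaded and you are already signed in; the TK-DEV context name is a hypothetical placeholder, not from this post:

```powershell
# List all contexts available in this session (one per subscription):
Get-AzContext -ListAvailable

# Make another context the default by name:
Select-AzContext -Name 'TK-PRD (yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy) - tim@timkennedy.net'

# Or run a single command against a different subscription without
# touching the default, by handing a context to -DefaultProfile:
$dev = Get-AzContext -ListAvailable | Where-Object Name -like '*TK-DEV*'
Get-AzVM -DefaultProfile $dev
```

Passing a context to -DefaultProfile is what lets you query one subscription while your default stays pointed at another.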

Building DBD::mysql on Solaris 10 Sparc

Having problems building the Perl DBD::mysql module on Solaris 10 Sparc 64-bit? The Perl 5.8.4 binary that ships with Solaris 10 is a 32-bit application, and you are probably running the 64-bit version of MySQL and trying to build DBD::mysql against that db version. What you actually need to do is download the 32-bit version of MySQL to link the DBD::mysql libraries against. I run the 64-bit MySQL database in /opt/mysql/mysql, so I unpacked the 32-bit MySQL as /opt/mysql/mysql32. Then run a CPAN shell, use "look DBD::mysql" to drop into the unpacked build directory, and build the module:

/usr/perl5/5.8.4/bin/perlgcc Makefile.PL \
    --libs '-R/usr/sfw/lib -R/opt/mysql/mysql32/lib -L/usr/sfw/lib -L/opt/mysql/mysql32/lib -lmysqlclient -lz -lposix4 -lcrypt -lgen -lsocket -lnsl -lm' \
    --cflags '-I/usr/sfw/include -I/usr/include -I/opt/mysql/mysql32/include'

Then gmake install UNINST=1 and you're done.
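Before building, it can save a failed link to confirm which binaries are which. A quick sanity check with file(1), using the paths from this post; the library filename under lib/ is an assumption about the MySQL tarball layout:

```shell
# The stock Perl is 32-bit, so DBD::mysql must link against the
# 32-bit MySQL client library:
file /usr/perl5/5.8.4/bin/perl                 # should report a 32-bit SPARC ELF
file /opt/mysql/mysql/lib/libmysqlclient.so    # 64-bit -- the wrong one to link
file /opt/mysql/mysql32/lib/libmysqlclient.so  # 32-bit -- link against this one
```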

logging shell commands to syslog on secure systems

I recently came across a blog post describing methods for capturing commands entered on the command line and recording them to syslog, either via a shell function() or by patching the actual shell itself. I found this article because I was asked by my boss to find a way to add CLI logging to some hosts on our network, to support audits and accountability. Some of the environments I work on are more secure than usual. In a typical corporate environment, whether internet connected or not, there is generally no need or requirement to use system auditing to track all user actions. Some government systems, whether classified or not, do require this, and some commercial systems in regulated industries, or that service government agencies, also require this level of auditing and accountability. In some cases it can be a smart idea for non-regulated systems too. For instance, if you're a managed services company that uses a team of operators to ma...
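A minimal sketch of the function() approach mentioned above: log each bash command to syslog. The helper name strip_histnum and the syslog facility/tag (local1, shellcmd) are my own choices, not from the post:

```shell
# Strip the leading history number from `history 1` output,
# e.g. "  101  ls -l /etc" -> "ls -l /etc".
strip_histnum() {
  printf '%s\n' "$1" | sed 's/^ *[0-9]* *//'
}

# Send the most recent command to syslog; wire this up in ~/.bashrc with:
#   PROMPT_COMMAND=log_last_command
log_last_command() {
  local last
  last=$(strip_histnum "$(HISTTIMEFORMAT= history 1)")
  logger -p local1.notice -t shellcmd -- "$USER: $last"
}

strip_histnum '  101  ls -l /etc'
```

Note that a determined user can unset PROMPT_COMMAND, which is exactly why the post also considers patching the shell itself.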

A generic perl script to scan a CIDR subnet for listeners on a specific port.

Ever had a customer ask you where *some process* was running on *some port* in their network? I have. And usually this involves an environment that doesn't have Nmap installed, or any other common port scanning tools. Fortunately, these days almost every *nix OS comes with Perl, even Solaris. Since I work for a managed services company, and we manage a multitude of different environments, each with its own set of restrictions and requirements, I try to write the most portable code that I can, so that it has the best chance of actually working in any given environment. This script uses a couple of standard Perl modules that are included as part of the default installation and don't require any CPAN-Fu. It takes a couple of options: a switch for verbosity, an IP address with or without a CIDR mask, and a TCP port. The CIDR mask defaults to /32, and the port defaults to 22. Here's an example of the output.

tcsh-[101]$ ./scan-port.pl 208.64.63.39/30 ...
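The full scan-port.pl is truncated above, but the core check it performs can be sketched with nothing beyond core modules, so it runs anywhere Perl does. A minimal sketch of testing a single host and port (the real script adds the CIDR expansion and option handling); only the port default of 22 comes from the post:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::INET;   # core module, no CPAN required

my $host = shift || '127.0.0.1';
my $port = shift || 22;

# Attempt a TCP connect with a short timeout; success means a listener.
my $sock = IO::Socket::INET->new(
    PeerAddr => $host,
    PeerPort => $port,
    Proto    => 'tcp',
    Timeout  => 2,
);

if ($sock) {
    print "$host:$port open\n";
    close $sock;
} else {
    print "$host:$port closed\n";
}
```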

diskread: reading beyond end of ramdisk (& how I recovered)

We had to do a maintenance to replace a NEM module in a Sun Blade 8000 Modular System. Two of my teammates went on down to the datacenter on other business and graciously offered to swap the NEM for me. They pulled the old one out and stuck the new one in. That's as simple as it should have been. Should have been. I wish. Instead, the chassis started to freak out, cycling its power over and over, and somehow was taking the CMM with it. In between one set of cycles, I was able to connect to the CMM via console and paste in a bunch of commands to shut down chassis power. I let it sit for a moment, then began to power up the system. First the chassis, then the individual blades. One blade came up, no problem. The next two, though, were very much less than happy, spitting out errors like:

diskread: reading beyond end of ramdisk
    start = 0x2000, size = 0x2000
failed to read superblock
diskread: reading beyond end of ramdisk
    start = 0x2000, size = 0x2000
failed to read superblock ...

OUCHIES! I broke my big toe this morning!

I broke my big toe. I went to the hospital and had them X-RAY it. It's broke! I was carrying my son (15 months old) down the steps, and I slipped. My only thought was "Don't let Jason get hurt." So I grabbed him and wrapped my arms around him as my left foot missed a step and my right foot slipped off the step it was on, hitting the step below toe first. My toes consequently folded underneath that foot at the same time as they became the primary weight bearers for all 250 lbs of me. Jason was not hurt. I think he was scared that daddy was screeching like his 19-month-old cousin Jade when they're fighting over a toy (actually he's the screecher, not her), but he was fine. Here's a pic of the X-RAY: After I hurt myself, I took about 5 minutes to gather my wits, then I took Jason to daycare and drove myself to the hospital, which is quite pleasant at 8:45 AM. A few X-RAYS and a silly post-op shoe later, and here I am on a diet of Advil and ...

Using Solaris 10 Update 3, Sun Cluster 3.2, Zones, & ZFS in a Multi-Node Cluster of Sun Fire T-2000s

It all started with a conference call with one of our customers. We wanted a way to set up some highly available systems that could be used for various beta or QA purposes, or production services, or anywhere in between as needed. We also wanted a way to maximize the resources. We had 4 servers available to us, all Sun Fire T-2000s. If we used them as straight servers, they'd be great at anything they do, right? 8 cores, 4 threads per core, 32GB of RAM. Nice. Capable of running dozens of zones without skipping a beat. Perhaps even hundreds of zones. Zones make perfect development boxes, right? You can blow them away and re-install them in a matter of minutes, or even seconds on ZFS. Zones do pretty well as production environments, too. We're currently using a large number of zones in production, to supply a variety of services. Zones on ZFS make particularly good dev boxes because you can take frequent snapshots and roll back as desired. ** Zones with their zoneroot ...
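The snapshot-and-rollback workflow that makes zones on ZFS such good dev boxes looks roughly like this. A minimal sketch; the pool, dataset, and zone names (zpool1, devzone1) are placeholders, not from the post:

```shell
# Snapshot the zone's root dataset before a risky change:
zfs snapshot zpool1/zones/devzone1@pre-change

# If the change goes badly, halt the zone and roll its root back,
# then boot it again -- seconds, not a re-install:
zoneadm -z devzone1 halt
zfs rollback zpool1/zones/devzone1@pre-change
zoneadm -z devzone1 boot
```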

Don't overlook the simple answers!

Today, I spent a good part of the day troubleshooting an Oracle 10g database whose db_recovery_file_dest kept filling up. Now, I'm not a DBA by trade, just a technical generalist with a penchant for Googling. I increased the size of the db_recovery_file_dest, and 4 hours later it was full again. I could not for the life of me figure out why the archiving and log rotation RMAN scripts weren't working. I ran them manually, and voila! Problem fixed again, for a limited time. That's when it occurred to me to look in /var/cron/log. Sure enough, I found the answer to all my problems. Well, not ALL of my problems, but enough of the ones I was dealing with today that I rated today a success. The oracle user's password had expired. That was it. The root cause of two database outages due to the recovery log destination filling up, and the database refusing connections, and hours of troubleshooting. An expired password. This brings me to a lesson I know well, but of...