AWS Opens Access Logs for Elastic Load Balancers

Today we are giving you additional insight into the operation of your Elastic Load Balancers with the addition of an access log feature. After you enable and configure this feature for an Elastic Load Balancer, log files will be delivered to the Amazon S3 bucket of your choice. The log files contain information about each HTTP and TCP request processed by the load balancer.

via Amazon Web Services Blog: Access Logs for Elastic Load Balancers.

This has been a long time coming but it is a welcome development. I’m looking forward to plowing through those access logs and running my own analysis on them.
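
If you want to flip this on from the command line rather than the console, something along these lines should do it with the AWS CLI. The load balancer name, bucket, and prefix here are made up for illustration, and the target S3 bucket needs a policy that lets the ELB service write to it:

    aws elb modify-load-balancer-attributes \
        --load-balancer-name my-load-balancer \
        --load-balancer-attributes '{"AccessLog":{"Enabled":true,"S3BucketName":"my-elb-logs","EmitInterval":60,"S3BucketPrefix":"production"}}'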

Did You Know That Amazon Provides Free AWS Training Videos and Labs?

New to AWS and looking to gain a foundational knowledge about key AWS services? Our “Introduction to AWS” series includes free, on-demand instructional videos and labs that enable you to learn about AWS in 30 minutes or less. Start by watching a short video about an AWS service to learn about key concepts and terminology and see a step-by-step console demonstration. Next, get hands-on practice using the AWS service with a free self-paced training lab.

via AWS Training – Free Online Videos and Labs.

I didn’t, but I do now. These are great for learning about the fundamentals of AWS cloud computing. Topics covered include intros to Simple Storage Service (S3), Elastic Compute Cloud (EC2), Identity and Access Management (IAM), Relational Database Service (RDS), and Elastic Load Balancing. Used along with the AWS Free Tier, these represent an excellent way for anyone to become more familiar with running servers and systems and to get a better understanding of cloud computing.

Amazon Announces General Availability of AWS CLI

We are pleased to announce the General Availability (GA) release of the AWS Command Line Interface, a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts. The GA release supports 23 services and includes new file commands for Amazon S3. Using a file system command syntax, you can easily list the contents of online buckets, upload a folder full of files, and synchronize local files with objects stored in Amazon S3.

To get started with the AWS CLI, see the User Guide.

via Announcing AWS Command Line Interface – General Availability.

For all you fans of the command line, now you can get some real work done. Bonus: the AWS CLI is written in Python and is open source, so you can follow along on GitHub.
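
To give a taste of those new file commands, here is roughly what they look like once you’ve run aws configure to set up your credentials; the bucket name is made up:

    aws s3 ls s3://my-example-bucket                                  # list the contents of a bucket
    aws s3 cp ./reports s3://my-example-bucket/reports/ --recursive   # upload a folder full of files
    aws s3 sync ./reports s3://my-example-bucket/reports/             # push only files that have changed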

After 3+ Years Amazon RDS Cloud Database Service Achieves “General Availability”

The Amazon Relational Database Service (RDS) was designed to simplify one of the most complex of all common IT activities: managing and scaling a relational database while providing fast, predictable performance and high availability.
RDS in Action
In the 3.5 years since we launched Amazon RDS, a lot has happened. Amazon RDS is now being used in mission-critical deployments by tens of thousands of businesses of all sizes. We now process trillions of I/O requests each month for these customers. We’re seeing strong adoption in enterprises such as Samsung and Unilever, web-scale applications like Flipboard and Airbnb, and large-scale organizations like NASA JPL and Obama for America.

via Amazon Web Services Blog: Amazon RDS: 3.5 years, 3 Engines, 9 Regions, 50+ Features and Tens of Thousands of Customers.

As part of a recent rebuild of Classcaster, I shifted the MySQL database for the system to Amazon RDS. I found the process of importing an existing database to be straightforward and was up and running in no time. Since it is just an instance of MySQL running in the Amazon cloud, I administer it as I do my other MySQL databases running on AWS EC2 instances, using SQLyog on Mac and Windows and MySQL Workbench on Linux[1].
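
For anyone wondering, the import really is just standard MySQL tooling pointed at the RDS endpoint. A rough sketch, with the endpoint, user, and database names made up for illustration:

    # dump the existing database from the old server (--databases includes the CREATE DATABASE statement)
    mysqldump -u webuser -p --databases classcaster_db > classcaster_db.sql
    # load the dump into the RDS instance, using the instance endpoint as the host
    mysql -h mydbinstance.abcdefghijkl.us-east-1.rds.amazonaws.com -u admin -p < classcaster_db.sql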

As far as performance goes, RDS seems a bit more responsive than the AWS EC2 hosted databases I run. It is important to note that it is possible to knock it over by overloading the connection pool[2]. Logging and backups are handled well, and access to these from the RDS dashboard is pretty good. Although I haven’t tried it yet, the features exist to scale the database as needed. I may take advantage of some of this if I decide to move our main databases to RDS.

Overall, I’d recommend RDS as a good way to get a database up and running quickly and to provide a stable backend for your systems.

[1]I do like the latest version of Workbench, especially its real-time monitor features, so I’m likely to move to it on all platforms.
[2]As I discovered while dealing with one of those way too frequent brute force attacks against WordPress.

AWS SDK Now Available For Node.js

The General Availability (GA) release of the AWS SDK for Node.js is now available and can be installed through npm as aws-sdk. We have added a number of features since the preview release including bound parameters, streams, IAM roles for EC2 instances, version locking, and proxies.

via Amazon Web Services Blog: AWS SDK for Node.js – Now Generally Available.

With the availability of the AWS SDK for Node.js, it is now possible to do things like add S3 storage functionality directly to your Node.js app. Adding AWS features to the real-time interactivity of Node.js will just make it more attractive as a platform.
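
Here is a minimal sketch of what that looks like; the bucket and file names are hypothetical, and credentials are assumed to come from the environment or an IAM role:

    var AWS = require('aws-sdk');
    var fs = require('fs');

    var s3 = new AWS.S3({region: 'us-east-1'});

    // stream a local file straight into an S3 bucket
    s3.putObject({
      Bucket: 'my-example-bucket',
      Key: 'uploads/notes.txt',
      Body: fs.createReadStream('./notes.txt')
    }, function (err, data) {
      if (err) { return console.error('upload failed:', err); }
      console.log('uploaded, ETag:', data.ETag);
    });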

CALI’s Looking For a Sys Admin, Here’s A Brief History

Usually I might not be too keen to lose some of my job responsibilities, but in this case I couldn’t be happier. CALI is adding a systems administrator to wrangle all our servers, more than 20 at last count, on a full-time basis. Since I started working at CALI 9 years ago, my time has been split between web/database/cool project development and administering CALI’s servers and systems.

Back in 2003 that meant riding herd on an aging Windows NT server, a Win2K server handling some video streaming, and a couple of dark servers whose futures were not yet set. Of course the servers were in Chicago and I was in Atlanta. Things changed rapidly. The dark servers were brought online running Linux, and our production web and storage systems were built out on the LAMP stack. Within a couple of years I added 3 servers at Emory in Atlanta to handle the increased demand for CALI services and resources online.

It wasn’t very long before we were struggling with large spikes in demand that were taxing our servers and we needed a better solution. Simply increasing the amount of hardware we owned wasn’t really an option since we were borrowing space and bandwidth from the law schools at Kent and Emory. At just the right time, Amazon Web Services came along and CALI jumped into the cloud.

Moving our web infrastructure to the AWS cloud gave us tremendous flexibility at a reasonable cost. After some trial and error I was able to configure a load-balanced web cluster that could be scaled up and down as demand for CALI resources and services ebbed and flowed over the course of an academic year. Using the cloud meant that I could provision some services on their own servers so that things like Apache Solr and Asterisk could stand alone. As a result of the move to the cloud, by the beginning of 2011 I found myself administering 15 to 20 servers in the cloud alone (exact numbers depended on the time of year) plus another half dozen physical servers in 2 geographically dispersed locations.

All that sounds like a full-time job itself, but that was only half the job. While all that infrastructure was being built out I was also developing 3 different versions of the CALI website, the Classcaster phone-to-blog system, a couple of iterations of eLangdell, the Free Law Reporter, and dealing with various other projects. Working on these development projects is what I really enjoy, but they often get pushed aside since keeping the servers running has to take priority.

Now CALI is hiring a systems administrator to take over (or clean up) the running of our infrastructure. I’m looking forward to handing the keys of the cloud over to someone else so I can focus on all of the great projects that are in the pipeline. When can you start?

Details on the CALI sys admin job, which is located in our Chicago office, are at http://cca.li/6J.

AWS Adds MSFT Windows Server 2K8 to Free Tier, Now Everyone Can Give Cloud a Whirl!

The AWS Free Usage Tier now allows you to run Microsoft Windows Server 2008 R2 on an EC2 t1.micro instance for up to 750 hours per month. This benefit is open to new AWS customers and to those who are already participating in the Free Usage Tier, and is available in all AWS Regions with the exception of GovCloud. This is an easy way for Windows users to start learning about and enjoying the benefits of cloud computing with AWS.

via Amazon Web Services Blog: AWS Free Usage Tier now Includes Microsoft Windows on EC2.

This addition means there is no reason not to try out AWS now. Even those steadfast MSFT admins with no interest in Linux can give the cloud a whirl for free for a year. Go ahead, you know you’ve been dying to find out what all the fuss is about.

Amazon Adds Import/Export of Data From External Storage Devices Shipped to AWS Data Centers

AWS Import/Export accelerates moving large amounts of data into and out of AWS using portable storage devices for transport. AWS transfers your data directly onto and off of storage devices using Amazon’s high-speed internal network and bypassing the Internet. For significant data sets, AWS Import/Export is often faster than Internet transfer and more cost effective than upgrading your connectivity.

via AWS Import/Export.

This is an interesting development for folks who need to move really large amounts of data on or off of the cloud. According to a table in the article, shipping a drive and having Amazon do the transfer at the data center becomes cost-effective once you are pushing more than 1TB of data over a 10Mbps connection. That is a lot of data[1].
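
Some quick back-of-the-envelope math shows why: 1TB is roughly 8 trillion bits, and a 10Mbps connection moves about 10 million bits per second, so even at full speed the transfer takes on the order of 800,000 seconds, better than nine days of sustained pushing before you account for any overhead. At that point putting a drive in a box starts to look pretty sensible.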

The service will accept eSATA, USB2.0, and internal SATA drives and transfer your data to an S3 bucket or an EBS Snapshot.

[1]For reference, a single copy of all the data on CALI’s public servers (lessons, court opinions, podcasts, conference video, and so on) weighs in at just over 1TB.

Is The Great Amazon EBS Failure the Beginning of the End For Disk Abstraction?

The promise of network block storage is wonderful: Take a familiar abstraction (the disk), sprinkle on some magic cloud pixie dust so that it’s completely reliable, available over the same cheap network you’re using for app traffic, map it to any instance in a datacenter regardless of network topology, make it so cheap it’s practically free, and voila, we can have our cake and eat it too! It’s the holy grail many a storage vendor, most of whom with decades experience in storage systems and engineering teams thousands strong have chased for a long, long time. The disk that never dies. The disk that’s not a disk.

The reality, however, is that the disk has never been a great abstraction, and the long history of crappy implementations has meant that many behavioral workarounds have found their way far up the stack. The best case scenario is that a disk device breaks and it’s immediately catastrophic taking your entire operating system with it. Failure modes go downhill from there. Networks have their own set of special failure modes too. When you combine the two, and that disk you depend on is sitting on the far side of the network from where your operating system is, you get a combinatorial explosion of complexity.

via Magical Block Store: When Abstractions Fail Us « Joyeur.

Fascinating piece on the perils of disk abstraction. Raises a very good question: Why do we worry about disks at all in the cloud? I wonder how many folks would just be tossing data into the cloud without the comfy metaphor of disk and machine to lean on?

Best Description of the Likely Cascading Failure That Took Out EC2

Let’s think of a failure mode here: Network congestion starts making your block storage environment think that it has lost mirrors, you begin to have resilvering happen, you begin to have file systems that don’t even know what they’re actually on start to groan in pain, your systems start thinking that you’ve lost drives so at every level from the infrastructure service all the way to “automated provisioning-burning-in-tossing-out” scripts start ramping up, programs start rebooting instances to fix the “problems” but they boot off of the same block storage environment.

You have a run on the bank. You have panic. Of kernels. Or language VMs. You have a loss of trust so you check and check and check and check but the checking causes more problems.

via On Cascading Failures and Amazon’s Elastic Block Store « Joyeur.

Closing in on 36 hours since this meltdown began, Amazon has still not been able to restore all of the EC2 instances and EBS volumes that were knocked offline in the #SkynetMassacre. This article is the best explanation of what most likely happened. And the scary part is that it will happen again. And again.

Sadly, there is not a lot to do but try to build enough redundancy into your systems to survive this sort of thing. But it is likely that building that redundancy is going to bring about another meltdown at some point. Guess I’ll just need to keep thinking about how to deal with it.