Mountpoint for Amazon S3 – Generally Available and Ready for Production Workloads | AWS News Blog

Mountpoint for Amazon S3 is an open source file client that makes it easy for your file-aware Linux applications to connect directly to Amazon Simple Storage Service (Amazon S3) buckets. Announced earlier this year as an alpha release, it is now generally available and ready for production use on your large-scale read-heavy applications: data lakes, machine learning training, image rendering, autonomous vehicle simulation, ETL, and more. It supports file-based workloads that perform sequential and random reads and sequential (append-only) writes, and that don’t need full POSIX semantics.

Mountpoint for Amazon S3 – Generally Available and Ready for Production Workloads
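
A minimal sketch of what this looks like in practice, assuming the Mountpoint CLI (mount-s3) is installed and using placeholder bucket and path names: mount the bucket once, then read objects with ordinary file I/O.

```python
import subprocess
from pathlib import Path

BUCKET = "my-training-data"      # placeholder bucket name
MOUNT_POINT = Path("/mnt/s3")    # placeholder local mount path

# Mount the bucket with the Mountpoint client, then treat objects under it
# as ordinary read-only files.
MOUNT_POINT.mkdir(parents=True, exist_ok=True)
subprocess.run(["mount-s3", BUCKET, str(MOUNT_POINT)], check=True)

# Sequential read of one object, exactly as a file-aware application would do it.
with open(MOUNT_POINT / "datasets/train/part-0000.csv", "rb") as f:
    for chunk in iter(lambda: f.read(1024 * 1024), b""):
        pass  # feed chunks into your training / ETL pipeline here
```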

 

Run Bastions on Demand to Access Your AWS VPCs

Any time you have a VPC, you’ll likely need some way to gain access to the resources within the VPC from your local box. Typically, the way to do that is to run a bastion (or jumpbox) which you and your team can SSH into. The downside is that you are exposing an entry point into your network that is accessible by multiple people and running 24×7. And depending on how you manage permissions, you may not be able to restrict access to the box via IAM. Obviously, this is not ideal.

Luckily, we have Fargate.

With Fargate, we no longer need to maintain permanent bastion instances—we can create bastions when needed and tear them down when no longer in use. We can lock down bastion instances to an individual user both in terms of SSH keys and IP address. And we can use IAM to restrict both access to the API used to manage bastions and which SSH keys are used to log into an instance.

All in all, we save on infrastructure spend while reducing our attack surface.

Bastions on Demand :: The Consulting CTO

This looks like an intriguing solution to a problem that has bothered me for years. Running sshd provides an attack surface for bad actors simply because it’s there. Ideally you should never expose the SSH port to the public network, even if it is well secured. Bastion hosts are a well-known solution to this, but one that is often not implemented for one reason or another. Turning it into a service seems like a good idea.
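
The post has its own tooling around this, but as a rough sketch of the core idea (every identifier below is a placeholder, and the task definition is assumed to run sshd): open port 22 to just your current IP, then launch the bastion as a one-off Fargate task you stop when you’re done.

```python
import boto3
from urllib.request import urlopen

SECURITY_GROUP_ID = "sg-0123456789abcdef0"   # placeholder: bastion security group
SUBNET_ID = "subnet-0123456789abcdef0"       # placeholder: public subnet in the VPC
TASK_DEFINITION = "on-demand-bastion"        # placeholder: Fargate task running sshd
CLUSTER = "bastions"                         # placeholder: ECS cluster

# Allow SSH only from the caller's current public IP.
my_ip = urlopen("https://checkip.amazonaws.com").read().decode().strip()
boto3.client("ec2").authorize_security_group_ingress(
    GroupId=SECURITY_GROUP_ID,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
        "IpRanges": [{"CidrIp": f"{my_ip}/32", "Description": "temporary bastion access"}],
    }],
)

# Launch the bastion as a one-off Fargate task; stop the task and revoke the
# ingress rule when the session ends, so nothing runs 24x7.
ecs = boto3.client("ecs")
task = ecs.run_task(
    cluster=CLUSTER,
    taskDefinition=TASK_DEFINITION,
    launchType="FARGATE",
    networkConfiguration={"awsvpcConfiguration": {
        "subnets": [SUBNET_ID],
        "securityGroups": [SECURITY_GROUP_ID],
        "assignPublicIp": "ENABLED",
    }},
)
print(task["tasks"][0]["taskArn"])
```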

Aurora Serverless MySQL Generally Available | AWS News Blog

You may have heard of Amazon Aurora, a custom built MySQL and PostgreSQL compatible database born and built in the cloud. You may have also heard of serverless, which allows you to build and run applications and services without thinking about instances. These are two pieces of the growing AWS technology story that we’re really excited to be working on. Last year, at AWS re:Invent we announced a preview of a new capability for Aurora called Aurora Serverless. Today, I’m pleased to announce that Aurora Serverless for Aurora MySQL is generally available. Aurora Serverless is on-demand, auto-scaling, serverless Aurora. You don’t have to think about instances or scaling and you pay only for what you use.

Aurora Serverless MySQL Generally Available | AWS News Blog
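
A minimal sketch of provisioning a serverless cluster with boto3, using placeholder identifiers and capacity settings; the key differences from a provisioned cluster are EngineMode and the ScalingConfiguration.

```python
import boto3

rds = boto3.client("rds")

# Create an Aurora Serverless (MySQL-compatible) cluster; identifiers,
# credentials, and capacity settings below are placeholders.
rds.create_db_cluster(
    DBClusterIdentifier="demo-serverless",
    Engine="aurora",                 # MySQL-compatible Aurora
    EngineMode="serverless",
    MasterUsername="admin",
    MasterUserPassword="choose-a-strong-password",
    ScalingConfiguration={
        "MinCapacity": 2,            # Aurora capacity units (ACUs)
        "MaxCapacity": 16,
        "AutoPause": True,           # pause when idle so you pay only for storage
        "SecondsUntilAutoPause": 300,
    },
)
```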

AWS Launches New Deep Learning AMIs for Machine Learning Practitioners

The Conda-based AMI comes pre-installed with Python environments for deep learning created using Conda. Each Conda-based Python environment is configured to include the official pip package of a popular deep learning framework, and its dependencies. Think of it as a fully baked virtual environment ready to run your deep learning code, for example, to train a neural network model. Our step-by-step guide provides instructions on how to activate an environment with the deep learning framework of your choice or swap between environments using simple one-line commands.

But the benefits of the AMI don’t stop there. The environments on the AMI operate as mutually-isolated, self-contained sandboxes. This means when you run your deep learning code inside the sandbox, you get full visibility and control of its run-time environment. You can install a new software package, upgrade an existing package or change an environment variable—all without worrying about interrupting other deep learning environments on the AMI. This level of flexibility and fine-grained control over your execution environment also means you can now run tests, and benchmark the performance of your deep learning models in a manner that is consistent and reproducible over time.

Finally, the AMI provides a visual interface that plugs straight into your Jupyter notebooks so you can switch in and out of environments, launch a notebook in an environment of your choice, and even reconfigure your environment—all with a single click, right from your Jupyter notebook browser. Our step-by-step guide walks you through these integrations and other Jupyter notebooks and tutorials.

New AWS Deep Learning AMIs for Machine Learning Practitioners | AWS AI Blog
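
As a rough illustration (environment names vary by AMI release, so treat tensorflow_p36 and pytorch_p36 as examples), switching frameworks is just a matter of activating a different Conda environment and importing:

```python
# Activation is a one-line shell command on the Conda-based AMI, e.g.:
#
#   source activate tensorflow_p36    # TensorFlow environment
#   source activate pytorch_p36       # or swap to the PyTorch environment
#
# Inside the active environment the framework and its pip dependencies are
# already installed, so your training code just imports and runs:
import tensorflow as tf

print(tf.__version__)   # confirm which build this environment provides
```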

New – USASpending.gov on an Amazon RDS Snapshot | AWS Blog

[S]tarting today, the entire public USAspending.gov database is available for anyone to copy via Amazon Relational Database Service (RDS). USAspending.gov includes data on all spending by the federal government, including contracts, grants, loans, employee salaries, and more. The data is available via a PostgreSQL snapshot, which provides bulk access to the entire USAspending.gov database, and is updated nightly. At this time, the database includes all USAspending.gov data for the second quarter of fiscal year 2017, and data going back to the year 2000 will be added over the summer. You can learn more about the database and how to access it on its AWS Public Dataset landing page.

Source: New – USASpending.gov on an Amazon RDS Snapshot | AWS Blog
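
A minimal sketch of copying the data into your own account with boto3; the snapshot identifier below is a placeholder, so check the AWS Public Dataset landing page for the current one.

```python
import boto3

rds = boto3.client("rds")

# Restore your own PostgreSQL instance from the shared public snapshot.
# The snapshot ARN and instance class here are placeholders.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="usaspending-copy",
    DBSnapshotIdentifier="arn:aws:rds:us-east-1:123456789012:snapshot:usaspending-db",  # placeholder
    DBInstanceClass="db.m4.large",
)

# Once the instance is available, connect with any PostgreSQL client and
# query the spending tables directly.
```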