Shared posts

28 Aug 17:33

Improved Audit Log

by dewski

We've just released some major improvements to our organization audit logs. As an organization admin, you can now see a running list of events as they're generated across your organization, or you can search for specific activities performed by the members of your org. This data provides you with better security insights and gives you the ability to audit account, team, and repository access over time.

Audit Log

The audit log exposes a number of events like repository deletes, billing updates, new member invites, and team creation. You can see the activities of individual team members, along with a map that highlights the location where events originated. Using the new query interface, you can then filter all these events by the action performed, the team member responsible, the date, repository, and location.
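
To give you a sense of the query interface, a search can combine several of these filters at once. The qualifiers below are illustrative only; see the documentation for the exact names and syntax:

action:repo.destroy actor:hubot created:>2014-08-01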

Example Query

For more information on the audit log, check out the documentation.

17 Aug 11:53

Refactoring

17 Aug 11:53

The npm registry

11 Aug 14:51

The Power of Saying No (To Sugar)

Over the last couple of weeks, I've been avoiding sugar. Not just avoiding eating spoon-fulls of crystal sugar, but avoiding food and drink that contains sugar.

I've been thinking about reducing my sugar intake for a while now. There's been enough change in how science views the sources of weight gain to be convincing.

But regardless, I found myself eating pie on the weekend, grabbing a sweet snack regularly (the perils of working from home), or getting a cookie or cheesecake at the coffee shop. It just seemed like the right thing to do.

Of course, there'd always be the convincing argument to myself that I could stop anytime I wanted, the classic trap of letting your irrational self get in the way of rational decisions. Just ask any smoker.

But then I came across an article that pointed to actively saying no being a possible answer. Studies showed that people who said "I don't" were much more likely to resist something than people who said "I can't."

Pretty remarkable, and I wanted to turn that into an experiment with myself.

It made me think of the day when I stopped smoking, on March 30, 1999.

All I said to myself was "I'm not smoking anymore."

I chucked my remaining cigarettes without much further thought, and that was the end of it.

The hardest part of breaking a habit is finding a replacement. Rather than resorting to a cigarette after lunch or other meals, I went for a coffee instead.

Yes, this is how my coffee affinity started.

There's something powerful in consciously saying no. One day, I decided to just say no to sugar.

That meant cutting out delicious things like cookies, cake, pie and everything else that contains processed sugar, but it's for the good of the bigger picture. Most of these are non-essential foods.

There's no rational downside to saying no to sugar.

But the real power is in consciously saying no to something, whether it's a habit you want to get rid of, a feature you want to add to your product, or simply deciding what you want to do with your day, your month, your life.

Saying no can have an incredible effect on your resolve to actually go through with it.

Knowing what not to do can be quite liberating for your mind too. It leaves room for other, more important things to do.

11 Aug 14:48

How to be Great at Customer Support

Portland I

When we set out to build Travis CI into a product and a business, I had one thing on my agenda that I wanted us to be good at, and that's customer support.

Offering an infrastructure product, we knew upfront that customers are going to have problems setting up their projects, and we knew that there'd be the occasional hard problem to solve.

Customer support turned into our number one priority to get right, and here's how we approached it.

Below are the simple hacks we've learned and applied over the last two years to make sure customers have a good experience when they interact with us. You can apply any and all of these steps instantly to improve your own customer support.

Remember Your Last Bad Support Experience

The simplest thing to help you learn how to do great customer support is to recall your last bad experience with another company.

We've all had them: responses with blunt links to knowledge bases, canned responses seemingly matched to keywords, and a customer support representative who's driven more by the number of calls he's making per hour than by the amount of happiness he's brought to a company's customers.

If I asked you to sit down and jot down your last five bad experiences with a product and its customer support, or the last tweets you fired off into the ether about a bad experience, you'd have a useful list in no time.

All you need to do now is figure out what annoyed you about these responses and incidents and figure out how you'd do it differently, how you would've wanted to be treated.

Great customer support people go out of their way to help a customer; they're the front line of delivering happiness directly to people, beyond simply selling them a good product.

First, Admit You're the Problem

When a customer is frustrated, you can read it in their emails asking for help. Some customers prefer to be snarky; others say things that aren't very nice. People will say negative things about your product. We may not like it, and we may get easily offended when they do, but that shouldn't get in the way of a positive response.

When your customer is having troubles, you need to think about their pain. When they're frustrated, it's because of your product and your decisions. Always assume that the problem is on your end when a customer is having troubles.

Adopting this approach makes you think twice about your response. It removes a barrier, it frees you from responding with a snarky email or tweet and helps you focus on the problem.

Empathy, Empathy, Empathy

The most important value of interacting with customers, heck, with people, is empathy. Understand that your product is getting in their way rather than solving a problem, and really think about the issue.

It's okay to take a step back, look at all the details you have available and consider the view of the customer.

Empathy means taking the time to understand other people's emotions, their train of thought.

Empathy is the core value of customer support. It's the one thing that makes you great at customer support.

Just coincidentally, empathy also makes you a great customer to work with.

We're all humans, we're all driven by our own goals, and customer support is your one means to align them.

And Honesty Too

If your product can't do something your customer wants, or you can't give them a solution right now, be honest about it.

There's nothing wrong with saying "I don't know", as long as you're willing to take more time to investigate a possible solution.

If you can't find one, it's okay to admit that. We're all humans, and not every problem can be solved. Not every problem should be solved, at least not by your product.

Offer Solutions rather than Excuses

Customers aren't interested in hearing excuses; they're interested in one thing and one thing only: a solution to their problem.

If you can't offer one, that's okay, but a great customer support person goes out of their way to find one, even if it means using another product.

Giving a customer a solution, even if it doesn't involve your product, will make them happier than giving them none, than giving them excuses.

Learn How to Talk to People

You won't turn into a great customer support person overnight, but you can give your brain gentle nudges on how to talk to people better.

For me, reading a few books has helped a lot in shaping my language. Two in particular have been invaluable, and I'd recommend them to anyone. They're useful not just for customer support interactions, but for all kinds of people interactions.

"How To Win Friends and Influence People" is a timeless classic, and it taught me a lot about empathy and how to approach people in general and disgruntled customers in particular. It's the one book you should read no matter what you do. It shaped my interactions a lot.

"Drop the Pink Elephant" is the perfect companion. It teaches you about saying what you really mean rather than focus on things that remove clarity from a conversation. It's shaped customer interactions and the way we write our public postmortems.

Customer support is your number one differentiator as a company. It takes a lot of work and effort, but it's your best way to make your customers happy, to have meaningful interactions with them.

It pays in the long term to make sure you're doing it right. Great customer support is hard to measure in money or in any other metric, but it'll help you earn loyal customers. Knowing that you're willing to help no matter the problem gives every customer the incentive to come back for more.

But most importantly, great customer support makes your customers feel like they're treated as humans.

07 Aug 12:05

Route 53 Update - Domain Name Registration, Geo Routing, and a Price Reduction

by Jeff Barr

Amazon Route 53 is a highly available and scalable Domain Name Service (DNS), including a powerful Health Checking Service. Today we are extending Route 53 with support for domain name registration and management and Geo DNS. We are also reducing the price for Route 53 queries! Let's take a closer look at each one of these items.

Domain Name Registration and Management
I registered my first domain name in 1995! Back then, just about every aspect of domain management and registration was difficult, expensive, and manual. After you found a good name, you had to convince one or two of your tech-savvy friends to host your DNS records, register the name using an email-based form, and then bring your site online. With the advent of web-based registration and multiple registrars the process became a lot smoother and more economical.

Up until now, you had to register your domain at an external registrar, create the Hosted Zone in Route 53, and then configure your domain's entry at the registrar to point to the Route 53 name servers. With today's launch of Route 53 Domain Name Registration, you can now take care of the entire process from within the AWS Management Console (API access is also available, of course). You can buy, manage, and transfer (both in and out) domains from a wide selection of generic and country-specific top-level domains (TLDs). As part of the registration process, we'll automatically create and configure a Route 53 Hosted Zone for you. You can think up a good name, register it, and be online with static (Amazon Simple Storage Service (S3)) or dynamic content (Amazon Elastic Compute Cloud (EC2), AWS Elastic Beanstalk, or AWS OpsWorks) in minutes.

If you, like many other AWS customers, own hundreds or thousands of domain names, you know first-hand how much effort goes into watching for pending expirations and renewing your domain names. By transferring your domain to Route 53, you can take advantage of our configurable expiration notification and our optional auto-renewal. You can avoid embarrassing (and potentially expensive) mistakes and you can focus on your application instead of on your domain names. You can even reclaim the brain cells that once stored all of those user names and passwords.

Let's walk through the process of finding and registering a domain name using the AWS Management Console and the Route 53 API.

The Route 53 Dashboard gives me a big-picture view of my Hosted Zones, Health Checks, and Domains:

I begin the registration process by entering the desired name and selecting a TLD from the menu:

The console checks on availability within the selected domain and in some other popular domains. I can add the names I want to the cart (.com and .info in this case):


Then I enter my contact details:

I can choose to enable privacy protection for my domain. This option will hide most of my personal information from the public Whois database in order to thwart scraping and spamming.

When everything is ready to go, I simply agree to the terms and my domain(s) will be registered:

I can see all of my domains in the console:

I can also see detailed information on a single domain:

I can also transfer domains into or out of Route 53:

As I mentioned earlier, I can also investigate, purchase, and manage domains through the Route 53 API. Let's say that you are picking a name for a new arrival to your family and you want to make sure that you can acquire a suitable domain name (in most cases, consultation with your significant other is also advisable). Here's some code to automate the entire process! I used the AWS SDK for PHP.

The first step is to set the desired last name and gender, and the list of acceptable TLDs:

$LastName = 'Barr';
$Gender   = 'F';
$TLDs     = array('.com', '.org');

Then I include the AWS SDK and the PHP Simple HTML DOM and create the Route 53 client object:

require 'aws.phar';
require 'simple_html_dom.php';

// Connect to Route 53
$Client = \Aws\Route53Domains\Route53DomainsClient::factory(array('region' => 'us-east-1'));

Now I need an array of the most popular baby names. I took this list and parsed the HTML to create a PHP array:

$HTML       = file_get_html("http://www.babycenter.com/top-baby-names-2013");
$FirstNames = array();

$Lists = $HTML->find('table tr ol');
$Items = $Lists[($Gender == 'F') ? 0 : 1];

foreach ($Items->find('li') as $Item)
{
  $FirstNames[] = $Item->find('a', 0)->innertext;
}

With the desired last name and the list of popular first names in hand (or in memory to be precise), I can generate interesting combinations and call the Route 53 checkDomainAvailability function to see if they are available:

foreach ($FirstNames as $FirstName)
{
  foreach ($TLDs as $TLD)
  {
    $DomainName = $FirstName . '-' . $LastName . $TLD;

    $Result = $Client->checkDomainAvailability(array(
      'DomainName'  => $DomainName,
      'IdnLangCode' => 'eng'));

    echo "{$DomainName}: {$Result['Availability']}\n";
  }
}

I could also choose to register the first available name (again, consultation with your significant other is recommended here). I'll package up the contact information since I'll need it a couple of times:

$ContactInfo = array(
  'ContactType'      => 'PERSON',
  'FirstName'        => 'Jeff',
  'LastName'         => 'Barr',
  'OrganizationName' => 'Amazon Web Services',
  'AddressLine1'     => 'XXXX  Xth Avenue',
  'City'             => 'Seattle',
  'State'            => 'WA',
  'CountryCode'      => 'US',
  'ZipCode'          => '98101',
  'PhoneNumber'      => '+1.206XXXXXXX',
  'Email'            => 'jbarr@amazon.com');

And then I use the registerDomain function to register the domain:

if ($Result['Availability'] === 'AVAILABLE')
{
  echo "Registering {$DomainName}\n";

  $Result = $Client->registerDomain(array(
    'DomainName'              => $DomainName,
    'IdnLangCode'             => 'eng',
    'AutoRenew'               => true,
    'DurationInYears'         => 1,
    'BillingContact'          => $ContactInfo,
    'RegistrantContact'       => $ContactInfo,
    'TechContact'             => $ContactInfo,
    'AdminContact'            => $ContactInfo,
    'OwnerPrivacyProtected'   => true,
    'AdminPrivacyProtected'   => true,
    'TechPrivacyProtected'    => true,
    'BillingPrivacyProtected' => true));
}

Geo Routing
Route 53's new Geo Routing feature lets you choose the most appropriate AWS resource for content delivery based on the location where the DNS queries originate. You can now build applications that respond more efficiently to user requests, with responses that are wholly appropriate for the location. Each location (a continent, a country, or a US state) can be independently mapped to static or dynamic AWS resources. Some locations can receive static resources served from S3 while others receive dynamic resources from an application running on EC2 or Elastic Beanstalk.

You can use this feature in many different ways. Here are a few ideas to get you started:

  • Global Applications - Route requests to Amazon Elastic Compute Cloud (EC2) instances hosted in an AWS Region that is in the same continent as the request. You could do this to maximize performance or to meet legal or regulatory requirements.
  • Content Management - Provide users with access to content that has been optimized, customized, licensed, or approved for their geographic location. For example, you could choose to use distinct content and resources for red and blue portions of the United States. Or, you could run a contest or promotion that is only valid in certain parts of the world and use this feature to provide an initial level of filtering.
  • Consistent Endpoints - Set up a mapping of locations to endpoints to ensure that a particular location always maps to the same endpoint. If you are running an MMOG, routing based on location can increase performance, reduce latency, give you better control over time-based scaling, and increase the likelihood that users with similar backgrounds and cultures will participate in the same shard of the game.

To make use of this feature, you simply create some Route 53 Record Sets that have the Routing Policy set to Geolocation. Think of each Record Set as a mapping from a DNS entry (e.g. www.jeff-barr.com) to a particular AWS resource: an S3 bucket, an EC2 instance, or an Elastic Load Balancer. With today's launch, each Record Set with a Geolocation policy becomes effective only when the incoming request for the DNS entry originates within the bounds (as determined by an IP to geo lookup) of a particular continent, country, or US state. The Record Sets form a hierarchy in the obvious way and the most specific one is always used. You can also choose to create a default entry that will be used if no other entries match.

You can set up this feature from the AWS Management Console, the Route 53 API, or the AWS Command Line Interface (CLI). Depending on your application, you might want to think about an implementation that generates Record Sets based on information coming from a database of some sort.

Let's say that I want to provide static content to most visitors to www.jeff-barr.com, and dynamic content to visitors from Asia. Here's what I need to do. First I create a default Record Set for "www" that points to my S3 bucket:

Then I create another Record Set for "www", this one geolocated to Asia and pointing to an Elastic Load Balancer:
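
If you would rather script this than click through the console, here is a rough sketch of what that Asia-specific Record Set could look like when created with the AWS SDK for PHP. The hosted zone ID, load balancer values, and set identifier are placeholders, so check the parameter names against the Route 53 API reference before relying on them:

require 'aws.phar';

// Connect to Route 53 (the DNS API, not the Domains API used earlier)
$Route53 = \Aws\Route53\Route53Client::factory(array('region' => 'us-east-1'));

// A geolocation Record Set for "www" that applies only to queries originating in Asia
$Change = array(
  'Action'            => 'CREATE',
  'ResourceRecordSet' => array(
    'Name'          => 'www.jeff-barr.com.',
    'Type'          => 'A',
    'SetIdentifier' => 'Asia',                          // required when using geolocation routing
    'GeoLocation'   => array('ContinentCode' => 'AS'),  // the Asia continent code
    'AliasTarget'   => array(                           // alias to the Elastic Load Balancer
      'HostedZoneId'         => 'Z_ELB_EXAMPLE',        // placeholder: the ELB's hosted zone ID
      'DNSName'              => 'my-elb-123456789.ap-southeast-1.elb.amazonaws.com',
      'EvaluateTargetHealth' => false)));

$Route53->changeResourceRecordSets(array(
  'HostedZoneId' => 'Z_EXAMPLE123',                     // placeholder: the zone for jeff-barr.com
  'ChangeBatch'  => array('Changes' => array($Change))));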

Price Reduction
Last, but certainly not least, I am happy to tell you that we have reduced the prices for Standard and LBR (Latency-Based Routing) queries by 20%. The following prices go into effect as of August 1, 2014:

  1. Standard Queries - $0.40 per million queries for the first billion queries per month; $0.20 per million queries after that.
  2. LBR Queries - $0.60 per million queries for the first billion queries per month; $0.30 per million queries after that.
  3. Geo DNS Queries - $0.70 per million queries for the first billion queries per month; $0.35 per million queries after that.

Available Now
These new features are available now and the price reduction goes into effect tomorrow.

-- Jeff;

PS - Thanks to Steve Nelson of AP42 for digging up the Internic Domain Registration Template!

07 Aug 12:03

AWS Trusted Advisor For Everyone

by Jeff Barr

AWS Trusted Advisor is your customized cloud expert! It helps you to observe best practices for the use of AWS by inspecting your AWS environment with an eye toward saving money, improving system performance and reliability, and closing security gaps. Since we launched Trusted Advisor in 2013, our customers have viewed over 1.7 million best-practice recommendations for cost optimization, performance improvement, security, and fault tolerance and have reduced their costs by about 300 million dollars.

Today I have two big pieces of news for all AWS users. First, we are making a set of four Trusted Advisor best practices available at no charge. Second, we are moving the Trusted Advisor into the AWS Management Console.

Four Best Practices at no Charge
The following Trusted Advisor checks are now available to all AWS users at no charge:

Service Limits Check - This check inspects your position with regard to the most important service limits for each AWS product. It alerts you when you are using more than 80% of your allocated resources, such as EC2 instances and EBS volumes.

Security Groups - Specific Ports Unrestricted Check - This check will look for and notify you of overly permissive access to your EC2 instances and help you to avoid malicious activities such as hacking, denial-of-service attacks, and loss of data.

IAM Use Check - This check alerts you if you are using account-level credentials to control access to your AWS resources instead of following security best practices by creating users, groups, and roles to control access to the resources.

MFA on Root Account Check - This check recommends the use of multi-factor authentication (MFA) to improve security by requiring additional authentication data from a secondary device.

You can subscribe to the Business or Enterprise level of AWS Support in order to gain access to the remaining 33 checks (with more on the way).

Trusted Advisor in the Console
The Trusted Advisor is now an integral part of the AWS Management Console. We have fine-tuned the user interface to simplify navigation and to make it even easier for you to find and to act on recommendations and to filter out recommendations that you no longer want to see.

Let's take a tour of the Trusted Advisor, starting from the Dashboard. I can see a top-level summary of all four categories of checks at a glance:

Each category actually contains four distinct links. If I click on the large icon associated with each category I can see a summary of the checks without regard to their severity or status. Clicking on the smaller green, orange, or red icons will take you to items with no problems, items where investigation is recommended, and items where action is recommended, respectively. It looks like I have room for some improvements in my fault tolerance:

I can use the menu at the top to filter the checks (this is equivalent to using the green, orange, and red icons):

If I sign up for the Business or Enterprise level of support, I can also choose to tell Trusted Advisor to selectively exclude certain resources from the checks. In the following case, I am running several Amazon Relational Database Service (RDS) instances without Multi-AZ. They are test databases and high-availability isn't essential so I can exclude them from the test results:

I can also download the results of each check for further analysis or distribution:

I can even ask Trusted Advisor to send me a status update each week:

With the introduction of the console, we are also introducing a new, IAM-based model to control access to the results of each check and the actions associated with them in the console. To learn more about this important new feature, read about Controlling Access to the Trusted Advisor Console.

Available Now
As always (I never get tired of saying this), these new features are available now and you can start using them today!

-- Jeff;

07 Aug 12:01

Updated IAM Console

by Jeff Barr

If you are an AWS user, I sincerely hope that you are using AWS Identity and Access Management (IAM) to control access to your services and your resources! As you probably know, you can create AWS users and groups and use permissions to allow and deny access to many aspects of AWS.

Today we are launching an update to the IAM console to make it even easier for you to manage your IAM settings, even if you, like many other AWS customers, have created hundreds of IAM users, groups, roles, and policies.

The redesigned console streamlines management of large resource lists, eliminates the need to switch between tabs to accomplish common tasks, and offers a better experience on mobile devices. The new Security Checklist will help you to implement the recommendations for IAM best practices; the dashboard is cleaner and simpler, and you can now scroll through long lists of resources without the need to page back and forth. The sign-in link for IAM users is now more prominent.

To learn more, read Introducing the Redesigned AWS IAM console on the AWS Security Blog.

-- Jeff;

PS - Last month we also added Enhanced Password Management and Credential Reports to IAM.

25 Jul 08:57

Elastic Load Balancing Connection Timeout Management

by Jeff Barr

When your web browser or your mobile device makes a TCP connection to an Elastic Load Balancer, the connection is used for the request and the response, and then remains open for a short amount of time for possible reuse. This time period is known as the idle timeout for the Load Balancer and is set to 60 seconds. Behind the scenes, Elastic Load Balancing also manages TCP connections to Amazon EC2 instances; these connections also have a 60 second idle timeout.

In most cases, a 60 second timeout is long enough to allow for the potential reuse that I mentioned earlier. However, in some circumstances, different idle timeout values are more appropriate. Some applications can benefit from a longer timeout because they create a connection and leave it open for polling or extended sessions. Other applications tend to have short, non-recurring requests to AWS and the open connection will hardly ever end up being reused.

In order to better support a wide variety of use cases, you can now set the idle timeout for each of your Elastic Load Balancers to any desired value between 1 and 3600 seconds (the default will remain at 60). You can set this value from the command line or through the AWS Management Console.

Here's how to set it from the command line:

$ elb-modify-lb-attributes myTestELB --connection-settings "idletimeout=120" --headers
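
The same setting can also be changed programmatically. Here is a rough sketch using the AWS SDK for PHP (the load balancer name and region are just the ones from the example above, and the attribute names should be double-checked against the SDK documentation):

require 'aws.phar';

// Connect to Elastic Load Balancing
$Elb = \Aws\ElasticLoadBalancing\ElasticLoadBalancingClient::factory(array('region' => 'us-east-1'));

// Raise the idle timeout for myTestELB from the default 60 seconds to 120 seconds
$Elb->modifyLoadBalancerAttributes(array(
  'LoadBalancerName'       => 'myTestELB',
  'LoadBalancerAttributes' => array(
    'ConnectionSettings' => array('IdleTimeout' => 120))));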

And here is how to set it from the AWS Management Console:

This new feature is available now and you can start using it today! Read the documentation to learn more.

-- Jeff;

15 Jul 07:45

Farewell Node.js

by TJ Holowaychuk

Leaving node.js land

Continue reading on Medium »

15 Jul 07:45

Profiling Golang

by TJ Holowaychuk

Concise guide to profiling Go programs

Continue reading on Medium »

15 Jul 07:45

Go packages

by TJ Holowaychuk

Similar to any other large communities it becomes increasingly hard to find quality packages for any given task, so this is simply a list…

Continue reading on Medium »

15 Jul 07:38

Store and Monitor OS & Application Log Files with Amazon CloudWatch

by Jeff Barr

When you move from a static operating environment to a dynamically scaled, cloud-powered environment, you need to take a fresh look at your model for capturing, storing, and analyzing the log files produced by your operating system and your applications. Because instances come and go, storing log files locally for the long term is simply not appropriate. When running at scale, simply finding storage space for new log files and managing expiration of older ones can become a chore. Further, there's often actionable information buried within those files. Failures, even if they are one in a million or one in a billion, represent opportunities to increase the reliability of your system and to improve the customer experience.

Today we are introducing a powerful new log storage and monitoring feature for Amazon CloudWatch. You can now route your operating system, application, and custom log files to CloudWatch, where they will be stored in durable fashion for as long as you'd like. You can also configure CloudWatch to monitor the incoming log entries for any desired symbols or messages and to surface the results as CloudWatch metrics. You could, for example, monitor your web server's log files for 404 errors to detect bad inbound links or 503 errors to detect a possible overload condition. You could monitor your Linux server log files to detect resource depletion issues such as a lack of swap space or file descriptors. You can even use the metrics to raise alarms or to initiate Auto Scaling activities.

Vocabulary Lesson
Before we dig any deeper, let's agree on some basic terminology! Here are some new terms that you will need to understand in order to use CloudWatch to store and monitor your logs:

  • Log Event - A Log Event is an activity recorded by the application or resource being monitored. It contains a timestamp and raw message data in UTF-8 form.
  • Log Stream - A Log Stream is a sequence of Log Events from the same source (a particular application instance or resource).
  • Log Group - A Log Group is a group of Log Streams that share the same properties, policies, and access controls.
  • Metric Filters - The Metric Filters tell CloudWatch how to extract metric observations from ingested events and turn them into CloudWatch metrics.
  • Retention Policies - The Retention Policies determine how long events are retained. Policies are assigned to Log Groups and apply to all of the Log Streams in the group.
  • Log Agent - You can install CloudWatch Log Agents on your EC2 instances and direct them to store Log Events in CloudWatch. The Agent has been tested on the Amazon Linux AMIs and the Ubuntu AMIs. If you are running Microsoft Windows, you can configure the ec2config service on your instance to send systems logs to CloudWatch. To learn more about this option, read the documentation on Configuring a Windows Instance Using the EC2Config Service.

Getting Started With CloudWatch Logs
In order to learn more about CloudWatch Logs, I installed the CloudWatch Log Agent on the EC2 instance that I am using to write this blog post! I started by downloading the install script:

$ wget https://s3.amazonaws.com/aws-cloudwatch/downloads/awslogs-agent-setup-v1.0.py

Then I created an IAM user using the policy document provided in the documentation and saved the credentials:

I ran the installation script. The script downloaded, installed, and configured the AWS CLI for me (including a prompt for AWS credentials for my IAM user), and then walked me through the process of configuring the Log Agent to capture Log Events from the /var/log/messages and /var/log/secure files on the instance:

Path of log file to upload [/var/log/messages]: 
Destination Log Group name [/var/log/messages]: 

Choose Log Stream name:
  1. Use EC2 instance id.
  2. Use hostname.
  3. Custom.
Enter choice [1]: 

Choose Log Event timestamp format:
  1. %b %d %H:%M:%S    (Dec 31 23:59:59)
  2. %d/%b/%Y:%H:%M:%S (10/Oct/2000:13:55:36)
  3. %Y-%m-%d %H:%M:%S (2008-09-08 11:52:54)
  4. Custom
Enter choice [1]: 1

Choose initial position of upload:
  1. From start of file.
  2. From end of file.
Enter choice [1]: 1

The Log Groups were visible in the AWS Management Console a few minutes later:

Since I installed the Log Agent on a single EC2 instance, each Log Group contained a single Log Stream. As I specified when I installed the Log Agent, the instance id was used to name the stream:

The Log Stream for /var/log/secure was visible with another click:

I decided to track the "Invalid user" messages so that I could see how often spurious login attempts were made on my instance. I returned to the list of Log Groups, selected the stream, and clicked on Create Metric Filter. Then I created a filter that would look for the string "Invalid user" (the patterns are case-sensitive):

As you can see, the console allowed me to test potential filter patterns against actual log data. When I inspected the results, I realized that a single login attempt would generate several entries in the log file. I was fine with this, so I stepped ahead, named the filter, and mapped it to a CloudWatch namespace and metric:
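
The same filter can also be created programmatically. Here is a hedged sketch using the AWS SDK for PHP; the filter, namespace, and metric names below are simply the ones I might pick, so treat them as placeholders:

require 'aws.phar';

// Connect to CloudWatch Logs
$Logs = \Aws\CloudWatchLogs\CloudWatchLogsClient::factory(array('region' => 'us-east-1'));

// Count every occurrence of the literal string "Invalid user" in /var/log/secure
$Logs->putMetricFilter(array(
  'logGroupName'          => '/var/log/secure',
  'filterName'            => 'InvalidUser',
  'filterPattern'         => '"Invalid user"',
  'metricTransformations' => array(
    array(
      'metricNamespace' => 'LogMetrics',
      'metricName'      => 'InvalidUserCount',
      'metricValue'     => '1'))));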

I also created an alarm to send me an email heads-up if the number of invalid login attempts grows to a suspiciously high level:

With the logging and the alarm in place, I fired off a volley of spurious login attempts from another EC2 instance and waited for the alarm to fire, as expected:

I also have control over the retention period for each Log Group. As you can see, logs can be retained forever (see my notes on Pricing and Availability to learn more about the cost associated with doing this):

Elastic Beanstalk and CloudWatch Logs
You can also generate CloudWatch Logs from your Elastic Beanstalk applications. To get you going with a running start, we have created a sample configuration file that you can copy to the .ebextensions directory at the root of your application. You can find the files at the following locations:

Place CWLogsApache-us-east-1.zip in the folder, then build and deploy your application as normal. Click on the Monitoring tab in the Elastic Beanstalk Console, and then press the Edit button to locate the new resource and select it for monitoring and graphing:

Add the desired statistic, and Elastic Beanstalk will display the graph:

To learn more, read about Using AWS Elastic Beanstalk with Amazon CloudWatch Logs.

Other Logging Options
You can push log data to CloudWatch from AWS OpsWorks, or through the CloudWatch APIs. You can also configure and use logs using AWS CloudFormation.

In a new post on the AWS Application Management Blog, Using Amazon CloudWatch Logs with AWS OpsWorks, my colleague Chris Barclay shows you how to use Chef recipes to create a scalable, centralized logging solution with nothing more than a couple of simple recipes.

To learn more about configuring and using CloudWatch Logs and Metrics Filters through CloudFormation, take a look at the Amazon CloudWatch Logs Sample. Here's an excerpt from the template:

"404MetricFilter": {
    "Type": "AWS::Logs::MetricFilter",
    "Properties": {
        "LogGroupName": {
            "Ref": "WebServerLogGroup"
        },
        "FilterPattern": "[ip, identity, user_id, timestamp, request, status_code = 404, size, ...]",
        "MetricTransformations": [
            {
                "MetricValue": "1",
                "MetricNamespace": "test/404s",
                "MetricName": "test404Count"
            }
        ]
    }
}

Your code can push a single Log Event to a Log Stream using the putLogEvents function. Here's a PHP snippet to get you started:

$result = $client->putLogEvents(array(
    'logGroupName'  => 'AppLog',
    'logStreamName' => 'ThisInstance',
    'logEvents'     => array(
        array(
            'timestamp' => time() * 1000,  // milliseconds since the epoch
            'message'   => 'Click!',
        )
    ),
    'sequenceToken' => 'string',           // returned by describeLogStreams or the previous putLogEvents call
));

Pricing and Availability
This new feature is available now in the US East (Northern Virginia) Region and you can start using it today.

Pricing is based on the volume of Log Entries that you store and how long you choose to retain them. For more information, please take a look at the CloudWatch Pricing page. Log Events are stored in compressed fashion to reduce storage charges; there is a 26-byte storage overhead per Log Event.

-- Jeff;

15 Jul 07:34

Introducing a simpler, faster GitHub for Mac

by alanjrogers

Following the recent release of GitHub for Windows 2.0, we’ve been working hard to bring our two desktop apps closer together.

We’ve just shipped a significant new update to GitHub for Mac, with simplified navigation and a renewed focus on your cloned repositories.

With this update, you’ll be able to spend less time navigating lists of repositories, and more time focusing on your repositories and your branches.

Repositories Next

Simplified Navigation

The sidebar now features all your repositories grouped by their origin, and the new toolbar lets you create, clone, and publish additional repositories quickly. You can also press ⇧⌘O to filter local repositories from those associated with GitHub or GitHub Enterprise, and switch between them.

Cloning repositories from GitHub

Fewer steps are required to clone repositories from GitHub Enterprise or GitHub.com. You can now press ⌃⌘O, type the repository name, and then press Enter to clone the repository you need.

Cloning GitHub Repositories

Switching and creating new branches

The branch popover (⌘B) has moved to the new toolbar, and now has a “Recent Branches” section that provides a convenient way to switch between all of your in-progress branches.

Branch creation (⇧⌘N) has moved to its own popover, and you can now create a new branch from any existing branch.

Switching and creating new branches

How do I get it?

GitHub for Mac will automatically update itself to the latest version. To update right away, open the “GitHub” menu, then “Check For Updates…”, or visit mac.github.com to download the latest release.

NOTE: This release and future releases of GitHub for Mac require OS X 10.8 or later. If you are still running OS X 10.7, you will not be updated to this release.

Feedback

We’d love to hear what you think about this release. If you have any comments, questions or straight-up bug reports, please get in touch.

27 Jun 14:46

Amazon Elastic Transcoder Update

by Jeff Barr

Transcoding is the process of adapting a media file (video or audio) to change the size, format, or other parameters in order to reduce the file size or to make it compatible with a particular type of device. Amazon Elastic Transcoder is a scalable, fully-managed service that works on a cost-effective pay per use model. You don't have to license or install any software and you can take advantage of transcoding presets for a variety of popular output devices and formats. Amazon Elastic Transcoder outputs H.264 and VP8 video and AAC, MP3, and Vorbis audio in a number of package formats including MP4, WebM, MPEG2-TS, MP3, and OGG. Additionally, you may output segmented video files and manifest files to support HLS video streaming.

Enhanced Parallelism
Over the last couple of months we have quietly raised the level of parallelism within the service. If your job includes more than one type of output, work proceeds in parallel, up to the limit of 30 outputs per job. You can have up to four pipelines per AWS account and each pipeline can process between 12 and 20 jobs simultaneously by default (the limit is specific to each Region; check here for the particulars).

Elastic Transcoder has been tuned to minimize queue times, even as queues grow to 100 or more jobs. We have measured a median queue time of 0.6 seconds, with the P90 point at just 3.2 seconds. In other words, processing for 90% of the jobs submitted to Elastic Transcoder begins within 3.2 seconds. As an example, a 5 minute HD transcoding job submitted in the US East (Northern Virginia) Region with an input bitrate of 20 Mbps and output bitrates of 800 Kbps, 1 Mbps, and 2 Mbps completed in about 3 minutes.

Getting Started
All AWS customers have access to 20 minutes of audio transcoding, 20 minutes of SD transcoding, and 10 minutes of HD transcoding at no charge as part of the AWS Free Usage Tier. Read the Elastic Transcoder Developer Guide and you will be up to speed in no time!

-- Jeff;

27 Jun 14:43

New SSD-Backed Elastic Block Storage

by Jeff Barr

Amazon Elastic Block Store (EBS for short) lets you create block storage volumes and attach them to EC2 instances. AWS users enjoy the ability to create EBS volumes that range in size from 1 GB up to 1 TB, create snapshot backups, and to create volumes from snapshots with a couple of clicks, with optional encryption at no extra charge.

We launched EBS in the Summer of 2008 and added the Provisioned IOPS (PIOPS) volume type in 2012. As a quick refresher, IOPS are short for Input/Output Operations per Second. A single EBS volume can be provisioned for up to 4,000 IOPS; multiple PIOPS volumes can be connected together via RAID to support up to 48,000 IOPS (see our documentation on EBS RAID Configuration for more information).

Today we are enhancing EBS with the addition of the new General Purpose (SSD) volume type as our default block storage offering. This new volume type was designed to offer balanced price/performance for a wide variety of workloads (small and medium databases, dev and test, and boot volumes, to name a few), and should be your first choice when creating new volumes. These volumes take advantage of the technology stack that we built to support Provisioned IOPS, and are designed to offer 99.999% availability, as are the existing EBS volume types.

General Purpose (SSD) volumes take advantage of the increasing cost-effectiveness of SSD storage to offer customers 10x more IOPS, 1/10th the latency, and more bandwidth and consistent performance than offerings based on magnetic storage. With a simple pricing structure where you only pay for the storage provisioned (no need to provision IOPS or to factor in the cost of I/O operations), the new volumes are priced as low as $0.10/GB-month.

General Purpose (SSD) volumes are designed to provide more than enough performance for a broad set of workloads all at a low cost. They predictably burst up to 3,000 IOPS, and reliably deliver 3 sustained IOPS for every GB of configured storage. In other words, a 10 GB volume will reliably deliver 30 IOPS and a 100 GB volume will reliably deliver 300 IOPS. There are more details on the mechanics of the burst model below, but most applications won't exceed their burst and actual performance will usually be higher than the baseline. The volumes are designed to deliver the configured level of IOPS performance with 99% consistency.

You can use this new volume type with all of the EBS-Optimized instance types for greater throughput and consistency.

Boot Boost
The new General Purpose (SSD) volumes can enhance the performance and responsiveness of your application in many ways. For example, it has a very measurable impact when booting an operating system on an EC2 instance.

Each newly created SSD-backed volume receives an initial burst allocation that provides up to 3,000 IOPS for 30 minutes. This initial allocation provides for a speedy boot experience for both Linux and Windows, and is more than sufficient for multiple boot cycles, regardless of the operating system that you use on EC2.

Our testing indicates that a typical Linux boot requires about 7,000 I/O operations and a typical Windows boot requires about 70,000. Switching from a Magnetic volume to a General Purpose (SSD) volume of the same size reduces the typical boot time for Windows 2008 R2 by approximately 50%.

If you have been using AWS for a while, you probably know that each EC2 AMI specifies a default EBS volume type, often Magnetic (formerly known as Standard). A different volume type can be specified at instance launch time. The EC2 console makes choosing General Purpose (SSD) volumes in place of the default simple, and you can optionally make this the behavior for all instance launches made from the console.

When you use the console to launch an instance, you have the option to change the default volume type for the boot volume. You can do this for a single launch or for all future launches from the console, as follows (you can also choose to stick with magnetic storage):

If you launch your EC2 instances from the command line, or the EC2 API, you need to specify a different block device mapping in order to use the new volume type. Here's an example of how to do this from the command line via the AWS CLI:

$ aws ec2 run-instances \
  --key-name mykey \
  --security-groups default \
  --instance-type m3.xlarge \
  --image-id ami-60f69f50 \
  --block-device-mappings '[{"DeviceName":"/dev/xvda","Ebs":{"VolumeType":"gp2"}}]' \
  --region us-west-2

To make it easier to get started with General Purpose (SSD) boot volumes when using the command line or EC2 API, versions of the latest Amazon Linux AMI and the Windows Server 2012 R2 Base AMI in English which specify General Purpose (SSD) volumes as the default are now available. To obtain the ID of the latest published General Purpose (SSD) Windows AMI in your region you can use the Get-EC2ImageByName cmdlet as follows:

C:\> Get-EC2ImageByName -Names Windows_Server-2012-R2_RTM-English-64Bit-GP2*

Here are the names and identifiers for the Amazon Linux AMIs:

Region          AMI ID         Full Name
us-east-1       ami-aaf408c2   amazon/amzn-ami-hvm-2014.03.2.x86_64-gp2
us-west-2       ami-8f6815bf   amazon/amzn-ami-hvm-2014.03.2.x86_64-gp2
us-west-1       ami-e48b8ca1   amazon/amzn-ami-hvm-2014.03.2.x86_64-gp2
eu-west-1       ami-dd925baa   amazon/amzn-ami-hvm-2014.03.2.x86_64-gp2
ap-southeast-1  ami-82d78bd0   amazon/amzn-ami-hvm-2014.03.2.x86_64-gp2
ap-southeast-2  ami-91d9bcab   amazon/amzn-ami-hvm-2014.03.2.x86_64-gp2
ap-northeast-1  ami-df470ede   amazon/amzn-ami-hvm-2014.03.2.x86_64-gp2
sa-east-1       ami-09cf6014   amazon/amzn-ami-hvm-2014.03.2.x86_64-gp2

We are also working to make it simpler to configure storage for your instance so that you can easily choose storage options for existing EBS-backed AMIs (stay tuned for an update).

Choosing an EBS Volume Type
With today's launch, you can now choose from three distinct types of EBS volumes and might be wondering which one is best for each use case. Here are a few thoughts and guidelines:

  • General Purpose (SSD) - The new volume type is a great fit for small and medium databases (either NoSQL or relational), development and test environments, and (as described above) boot volumes. In general, you should now plan to start with this volume type and move to one of the others only if necessary. You can achieve up to 48,000 IOPS by connecting multiple volumes together using RAID.
  • Provisioned IOPS (SSD) - Volumes of this type are ideal for the most demanding I/O intensive, transactional workloads and large relational or NoSQL databases. This volume type offers the most predictable and consistent performance, and allows you to provision exactly the level of performance you need and pay for exactly what you provision. Once again, you can achieve up to 48,000 IOPS by connecting multiple volumes together using RAID.
  • Magnetic - Magnetic volumes (formerly known as Standard volumes) provide the lowest cost per Gigabyte of all Amazon EBS volume types and are ideal for workloads where data is accessed less frequently and cost management is a primary objective.

You can always switch from one volume type to another by creating a snapshot of an existing volume and then creating a new volume of the desired type from the snapshot. You can also use migration commands and tools such as tar, dd or Robocopy.
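
As an illustration, here is roughly what that snapshot-and-recreate sequence looks like with the AWS SDK for PHP. The volume ID and Availability Zone are placeholders, and in practice you would wait for the snapshot to finish before creating the new volume:

require 'aws.phar';

// Connect to EC2
$Ec2 = \Aws\Ec2\Ec2Client::factory(array('region' => 'us-east-1'));

// Snapshot the existing volume (placeholder volume ID)
$Snapshot = $Ec2->createSnapshot(array(
  'VolumeId'    => 'vol-12345678',
  'Description' => 'Migrating to General Purpose (SSD)'));

// In practice, poll describeSnapshots until the snapshot status is "completed"
// before creating the new volume from it.
$Ec2->createVolume(array(
  'SnapshotId'       => $Snapshot['SnapshotId'],
  'AvailabilityZone' => 'us-east-1a',   // placeholder Availability Zone
  'VolumeType'       => 'gp2'));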

Under the Hood - Performance Burst Details
Each volume can also provide up to 3,000 IOPS in bursts that can span up to 30 minutes, regardless of the volume size. The burst of IOPS turns out to be a great fit for the use cases that I mentioned above. For example, the IOPS load generated by a typical relational database turns out to be very spiky. Database load and table scan operations require a burst of throughput; other operations are best served by a consistent expectation of low latency. The General Purpose (SSD) volumes are able to satisfy all of the requirements in a cost-effective manner. We have analyzed a wide variety of application workloads and carefully engineered General Purpose (SSD) to take advantage of this spiky behavior with the expectation that they will rarely exhaust their accumulated burst of IOPS.

Within the General Purpose (SSD) implementation is a Token Bucket model that works as follows (a short sketch after the list works through the arithmetic):

  • Each token represents an "I/O credit" that pays for one read or one write.
  • A bucket is associated with each General Purpose (SSD) volume, and can hold up to 5.4 million tokens.
  • Tokens accumulate at a rate of 3 per configured GB per second, up to the capacity of the bucket.
  • Tokens can be spent at up to 3000 per second per volume.
  • The baseline performance of the volume is equal to the rate at which tokens are accumulated: 3 IOPS per configured GB.
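
To make those numbers concrete, here is a tiny back-of-the-envelope sketch in PHP (plain arithmetic, not an API call) that estimates how long a freshly filled bucket lets a 100 GB volume run at full burst:

// A back-of-the-envelope model of the token bucket described above.
// The numbers come from the list; this is an illustration, not AWS code.
$VolumeSizeGB   = 100;
$BucketCapacity = 5400000;             // maximum number of accumulated I/O credits
$FillRate       = 3 * $VolumeSizeGB;   // credits earned per second (the baseline IOPS)
$SpendRate      = 3000;                // maximum credits spent per second (the burst IOPS)

// Starting from a full bucket, estimate how long a full-rate burst can last
$BurstSeconds = $BucketCapacity / ($SpendRate - $FillRate);

printf("A %d GB volume has a baseline of %d IOPS, bursts to %d IOPS,\n",
       $VolumeSizeGB, $FillRate, $SpendRate);
printf("and can sustain that burst for roughly %.0f seconds (about %.0f minutes).\n",
       $BurstSeconds, $BurstSeconds / 60);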

All of this work behind the scenes means that you, the AWS customer, can simply create EBS volumes of the desired size, launch your application, and your I/O to the volumes will proceed as rapidly and efficiently as possible.

OpsWorks and CloudFormation Support
You can create volumes of this type as part of an AWS OpsWorks Layer:

You can also create them from within a CloudFormation template as follows:

...
{
   "Type":"AWS::EC2::Volume",
   "Properties" : {
      "AvailabilityZone" : "us-east-1a",
      "Size" : 100,
      "VolumeType" : "gp2"
   }
}
...

Pricing
The new General Purpose (SSD) volumes are priced at $0.10 / GB / month in the US East (Northern Virginia) Region, with no additional charge for I/O operations. For pricing in other AWS Regions, please take a look at the EBS Pricing page.

We are also announcing that we are reducing the price of IOPS for Provisioned IOPS volumes by 35%. For example, if you create a Provisioned IOPS volume and specify 1,000 IOPS your monthly cost will decline from $100 to $65 per month in the US East (Northern Virginia) Region, with similar reductions in the other Regions. The cost for Provisioned Storage remains unchanged at $0.125 / GB / month.

-- Jeff;

27 Jun 14:40

Process Captions with Amazon Elastic Transcoder

by Jeff Barr

Today we are adding captioning support to Amazon Elastic Transcoder. You can now add, remove, or preserve captions as you transcode videos from one format to another, at no additional charge. Captions provide a transcript of the audio portion of the content, making it accessible to more people. With the passing of the Americans with Disabilities Act in 1990, captioning has become mandatory for certain types of content shown in the United States.

About Captions
I learned quite a bit about captions and captioning as I prepared to write this blog post. The term "closed captioning" refers to captions that must be activated by the viewer. The alternative ("open," "burned-in," or "hard-coded" captions) is visible to everyone. Since the public debut of closed captioning in 1973, a number of different standards have emerged. Originally, caption information for TV shows was concealed within a portion of the broadcast signal (also known as EIA-608, CEA-608, or "line 21 captions"). With the advent of digital television broadcasting, the EIA-708 protocol was introduced. Over the years, each protocol was refined, specialized, extended, and morphed into other protocols including (but most definitely not limited to) TTML, DFXP, EBU-TT, SRT, WebVTT, SCC, and mov-text. To make things even more interesting (and complicated), captions can be either embedded in the video stream or provided in a separate file commonly known as a "sidecar."

If you are a developer, supporting all of these formats would (obviously) be a lot of work. You need to handle the cross-product of input and output formats, and ensure that each one remains functional as you maintain and improve your code. Fortunately, we have taken care of all of that for you and you can focus on building your application!

New Captioning Support
Effective immediately, Elastic Transcoder accepts captions in the following formats:

  • CEA-608
  • CEA-708
  • TTML
  • DFXP
  • EBU-TT
  • SRT
  • WebVTT
  • SCC
  • mov-text

If your input file contains captions for more than one language, you can choose to retain some, none, or all of them.

Elastic Transcoder can generate captions in the following formats (one embedded, and up to four external):

  • DFXP
  • SRT
  • WebVTT
  • SCC
  • mov-text

Working With Captions
You can work with captions from the AWS Management Console, the AWS CLI or the Elastic Transcoder API. Here's a quick tour of the console's support for captions.

Start by enabling Captions in the Available Settings section of the transcoding job:

If your content has captions in one or more sidecar files, click Add Caption Source and add them:

Specify the desired caption format or formats (use Add Caption Format if you want multiple formats). If a format that you select is delivered in the form of a sidecar, you must also specify a filename pattern. The pattern will be used to generate the final file names for the sidecar file or files.
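
For those working through the API instead of the console, here is a hedged sketch of a transcoding job that pulls in an SRT sidecar and emits WebVTT captions, using the AWS SDK for PHP. The pipeline ID, preset ID, and object keys are placeholders, and the exact caption parameters should be verified against the Elastic Transcoder API reference:

require 'aws.phar';

// Connect to Elastic Transcoder
$Transcoder = \Aws\ElasticTranscoder\ElasticTranscoderClient::factory(array('region' => 'us-east-1'));

$Transcoder->createJob(array(
  'PipelineId' => '1111111111111-abcde1',             // placeholder pipeline ID
  'Input'      => array('Key' => 'input/movie.mp4'),
  'Output'     => array(
    'Key'      => 'output/movie.mp4',
    'PresetId' => '1351620000001-000010',             // placeholder preset ID, for illustration
    'Captions' => array(
      'MergePolicy'    => 'MergeOverride',            // prefer sidecar captions over embedded ones
      'CaptionSources' => array(
        array(
          'Key'      => 'input/movie-captions.srt',   // placeholder sidecar key
          'Language' => 'en',
          'Label'    => 'English')),
      'CaptionFormats' => array(
        array(
          'Format'  => 'webvtt',
          'Pattern' => '{language}-captions'))))));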

Start Today
This new feature is available now and you can start using it today. There's no extra charge for the use of captions; visit the Elastic Transcoder page to learn more about the service.

-- Jeff;

27 Jun 14:35

Delivery Notifications for Simple Email Service

by Jeff Barr

Amazon Simple Email Service (SES) is a simple, scalable, and cost-effective way to send transactional email, marketing messages, and other types of high-quality content. You can connect your application to Amazon SES using the SES SMTP Interface or a full set of APIs. Either way, you will be able to send your messages with high deliverability, guided by our real-time sending statistics.

Today we are enhancing SES with the addition of delivery notifications. You can now elect to receive an Amazon SNS notification each time SES successfully delivers a message to a recipient's email server. These notifications give you increased visibility into the mail delivery process. With today's release, you can now track deliveries, bounces, and complaints, all via notification to the SNS topic or topics of your choice.

JSON Notifications
Each delivery notification is a JSON object that looks like this:

{
  "notificationType": "Delivery",
  "mail": {
    "timestamp": "2014-05-28T22:40:59.638Z",
    "messageId": "0000014644fe5ef6-9a483358-9170-4cb4-a269-f5dcdf415321-000000",
    "source": "test@ses-example.com",
    "destination": [
      "success@simulator.amazonses.com",
      "recipient@ses-example.com"
    ]
  },
  "delivery": {
    "timestamp": "2014-05-28T22:41:01.184Z",
    "recipients": ["success@simulator.amazonses.com"],
    "processingTimeMillis": 1546,
    "reportingMTA": "a8-70.smtp-out.amazonses.com",
    "smtpResponse": "250 ok:  Message 64111812 accepted"
  }
}

You can route notifications to an existing SNS topic, or you can create and dedicate a new one. You can configure notification delivery through the SES Console or through the SetIdentityNotificationTopic API.
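
For reference, here is a rough sketch of enabling delivery notifications for a verified identity with the AWS SDK for PHP; the identity and topic ARN are placeholders:

require 'aws.phar';

// Connect to Amazon SES
$Ses = \Aws\Ses\SesClient::factory(array('region' => 'us-east-1'));

// Route delivery notifications for this identity to an SNS topic
$Ses->setIdentityNotificationTopic(array(
  'Identity'         => 'ses-example.com',       // verified domain or email address
  'NotificationType' => 'Delivery',              // Bounce and Complaint are also accepted
  'SnsTopicArn'      => 'arn:aws:sns:us-east-1:123456789012:ses-deliveries'));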

Get Notified Now
This new feature is available now and you can start using it today!

-- Jeff;

27 Jun 12:27

A better branches page

by cobyism

Branches are an essential part of collaborating using GitHub Flow. They’ve always been cheap and easy to create within a GitHub repository, and today we’re making branch management more straightforward.

At the top of any repository page, click Branches to see an overview of the branches across your project.

Atom’s branches page

You can quickly filter the branches you’ve created, and see which branches are most active. New sections on the page also make it more obvious how you need to take action on the branches in your repository—whether that’s cleaning up stale branches, examining a branch with a failing test, or sending a pull request for the branch you just pushed.

See the branches you care about

Need more help? See Creating and deleting branches within your repository and Viewing branches in your repository in GitHub Help.

06 May 18:22

AWS CloudTrail Update - Seven New Services & Support From CloudCheckr

by Jeff Barr

AWS CloudTrail records the API calls made in your AWS account and publishes the resulting log files to an Amazon S3 bucket in JSON format, with optional notification to an Amazon SNS topic each time a file is published.

Our customers use the log files generated by CloudTrail in many different ways. Popular use cases include operational troubleshooting, analysis of security incidents, and archival for compliance purposes. If you need to meet the requirements posed by ISO 27001, PCI DSS, or FedRAMP, be sure to read our new white paper, Security at Scale: Logging in AWS, to learn more.
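
As a small illustration of how easy the log files are to consume, here is a sketch that pulls a single CloudTrail log file from S3 and lists the API calls it records, using the AWS SDK for PHP (the bucket name and key are placeholders):

require 'aws.phar';

// Connect to S3, where CloudTrail delivers its log files
$S3 = \Aws\S3\S3Client::factory(array('region' => 'us-east-1'));

// Fetch one gzipped log file (placeholder bucket and key)
$Object = $S3->getObject(array(
  'Bucket' => 'my-cloudtrail-bucket',
  'Key'    => 'AWSLogs/123456789012/CloudTrail/us-east-1/2014/05/06/example.json.gz'));

// Each file is a gzipped JSON document with a top-level "Records" array
$Log = json_decode(gzdecode((string) $Object['Body']), true);

foreach ($Log['Records'] as $Record)
{
  echo "{$Record['eventTime']} {$Record['eventSource']} {$Record['eventName']}\n";
}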

Over the course of the last month or so, we have expanded CloudTrail with support for additional AWS services. I would also like to tell you about the work that AWS partner CloudCheckr has done to support CloudTrail.

New Services
At launch time, CloudTrail supported eight AWS services. We have added support for seven additional services over the past month or so. Here's the full list:

 Here's an updated version of the diagram that I published when we launched CloudTrail:

News From CloudCheckr
CloudCheckr (an AWS Partner) integrates with CloudTrail to provide visibility and actionable information for your AWS resources. You can use CloudCheckr to analyze, search, and understand changes to AWS resources and the API activity recorded by CloudTrail.

Let's say that an AWS administrator needs to verify that a particular AWS account is not being accessed from outside a set of dedicated IP addresses. They can open the CloudTrail Events report, select the month of April, and group the results by IP address. This will display the following report:

As you can see, the administrator can use the report to identify all the IP addresses that are being used to access the AWS account. If any of the IP addresses were not on the list, the administrator could dig in further to determine the IAM user name being used, the calls being made, and so forth.

CloudCheckr is available in Freemium and Pro versions. You can try CloudCheckr Pro for 14 days at no charge. At the end of the evaluation period you can upgrade to the Pro version or stay with CloudCheckr Freemium.

-- Jeff;

02 May 23:00

Sovereign: build and maintain your own private cloud

by Jerod Santo

Great idea from Alex Payne:

A set of Ansible playbooks to build and maintain your own private cloud: email, calendar, contacts, file sync, IRC bouncer, VPN, and more.

The project is aptly named Sovereign, and it provides quite the list of services.

I probably won’t use Sovereign to host my personal cloud, but I’m excited to use it as an Ansible-learning resource!


02 May 23:00

Build beautiful programming books with Git and Markdown

by Jerod Santo

There’s a lot of innovation (and iteration) going on in the online publishing space. GitBook continues that trend by offering a command line tool built specifically for creating programming books and exercises.

You write your book in Markdown and from that GitBook can generate a static website, PDF, eBook, and even JSON. Here’s what the results look like:

GitBook Preview


02 May 22:59

Gogs is a self-hosted Git service written in Go

by Jerod Santo

Gogs looks like a nice, new (still in Alpha) option if you want to self-host some Git repositories with a web interface similar to GitHub’s.

Gogs

It’s written purely in Go, so installation should be dead simple. From the README:

Gogs only needs one binary to setup your own project hosting on the fly!

Worth a look.


02 May 22:57

18 Unforgettable Restaurants with Unique Surroundings

by twistedsifter

 

From high in the sky to under the sea, these 18 restaurants with unique surroundings are a feast for the eyes. Whether you’re stepping into the past or gazing as far as the eye can see, these restaurants are sure to provide a memorable dining experience for your next meal.

If you know of any other restaurants with unique surroundings, be sure to let us know in the comments below so we can do a follow-up post!

 

1. Under the Sea
Ithaa Restaurant, Maldives


© 2012 Conrad Hotels & Resorts

 

Located 16 feet (4.87m) below sea level, Ithaa restaurant offers unparalleled 180° views of sea life while you dine. The restaurant is part of the Conrad Maldives Rangali Island resort and is truly one of the most interesting dining experiences in the world.

Ithaa, which means mother-of-pearl in Dhivehi, is a mostly acrylic structure with a 14-person capacity. The space is approximately 5m x 9m (16ft x 30ft) and was designed and constructed by M.J. Murphy. The restaurant officially opened on April 15, 2005. Entrance to Ithaa is by way of a spiral staircase in a thatched pavilion at the end of a jetty. For more information please visit the official site at: Conrad Maldives Rangali Island Resort

 

 

2. Inside a Cave Overlooking the Water
Grotta Palazzese – Polignano a Mare, Italy


Photograph by Grotta Palazzese

 

In the town of Polignano a Mare in southern Italy (province of Bari, Apulia) lies a most intriguing dining experience: the Grotta Palazzese. Open only during the summer months, the restaurant is set inside a vaulted limestone cave looking outwards toward the sea. The restaurant is part of the Grotta Palazzese hotel located above.

HOTEL RISTORANTE GROTTA PALAZZESE
Via Narciso, 59 – Polignano a Mare (Bari) Puglia
Tel. +39 (0)80 4240677 – Fax +39 (0)80 4240767
Email: grottapalazzese@grottapalazzese.it
Website: http://www.grottapalazzese.it

 

 

3. Surrounded by Snow
SnowRestaurant – Kemi, Finland

snowrestaurant the snowcastle of kemi kemin lumilinna finland

 

SnowRestaurant is located at the LumiLinna Snowcastle in Kemi, Finland. The restaurant is rebuilt annually and typically opens around the end of January for lunch and dinner (daily). The temperature in the restaurant is always around -5 degrees Celsius. Reservations must be made in advance and you can see the menu for the upcoming season here.

 

 

4. Dinner in the Sky
Locations Around the World


Photograph by dinnerinthesky.com

 

Dinner in the Sky is hosted by a team of professionals at a table suspended at a height of 50 metres, and can be installed anywhere in the world as long as there is a surface of approximately 500 m² that can be secured. It is available for sessions of 8 hours and can accommodate up to 22 people around the table at each session, with three staff (chef, waiter, entertainer) in the middle. To date, there have been Dinner in the Sky events in 40 different countries. Visit the official site for more information.

 

 

5. Surrounded by Mountains
Berggasthaus Aescher – Wasserauen, Switzerland

Berggasthaus Aescher cliff side restaurant switzerland

Photograph via Switzerland Tourism

 

Berggasthaus Aescher is located 1,454 meters (4,770 ft) above sea level and is open from 1 May – 31 October annually. The cliffside restaurant rewards hikers with remarkable views of the surrounding Alpstein area in Wasserauen, Switzerland. The vertical cliff face it clings to is over 100 meters high. There is also a cable car that can take you back down to the valley. For more information visit MySwitzerland.com and TripAdvisor. You can also find the restaurant’s contact information below for any inquiries.

Beny & Claudia Knechtle-Wyss
CH-9057 Weissbad AI
T: 071 799 11 42 or 071 799 14 49
E-Mail: info@aescher-ai.ch

 

 

6. Restaurant on a Rock
The Rock – Zanzibar, Tanzania


 

The Rock is located on the south-east of Zanzibar island on the Michamwi Pingwe peninsula (about 45 minutes from Stone Town). The restaurant is situated on a rock not far from shore. Patrons can reach the restaurant by foot during low tide and by boat at high tide (offered by the restaurant). Visit the official site for more information: www.therockrestaurantzanzibar.com/

 

 

7. Inside a Thousand-Year-Old Cistern
Sarnic Restaurant – Istanbul, Turkey

sarnic_restaurant inside a cistern istanbul turkey

Photograph by Sarnic Restaurant

 

A cistern is a waterproof receptacle often built to catch and store rainwater. In Istanbul, Turkey, this thousand-year-old cistern, with high domes supported on six stone piers, sits at the top of the hill at the end of a row of small hotels in the narrow street immediately behind St. Sophia. It is now the site of a unique restaurant called Sarnic.

Sarnic Restaurant
Address: Sogukcesme Sokagi 34220 Sultanahmet / Istanbul
Phone: 0212 512 42 91 – 513 36 60

 

 

8. Open Air Rooftop Dining
Sirocco @ Lebua Hotel – Bangkok, Thailand

sirocco rooftop restaurant lebua hotel bangkok thailand

Photograph by Lebua Hotels & Resorts

 

Located on the 63rd floor of the Lebua Hotel in Bangkok, Thailand, is the award-winning, open-air rooftop restaurant Sirocco. Patrons are graced with incredible views of the bustling city below as they dine on authentic Mediterranean fare. For reservations and additional information, visit the official site.

 

 

9. Tree-Top Dining with Zip-Line Service
Treepod @ Soneva Kiri – Koh Kood, Thailand


Photograph by Soneva Kiri Resort

 

Guests at the Soneva Kiri resort on Koh Kood, Thailand, are invited to experience Treepod dining. Guests are seated in a bamboo pod after being hoisted up into the tropical foliage of Koh Kood’s ancient rainforest. You can gaze out across the boulder-covered shoreline while your food and drink are delivered via the zip-line acrobatics of your personal waiter.

For more information, visit the official site.

 

 

10. Wet Feet and Waterfalls
Labassin Waterfall Restaurant – Philippines


Photograph via Matador Trips

 

Located at the Villa Escudero Plantations and Resort in the Philippines is the Labassin Waterfall Restaurant. The restaurant is only open for lunch and guests dine from a buffet-style menu and eat at bamboo dining tables. It’s not a natural waterfall but a spillway from the Labasin Dam. The dam’s reservoir has been turned into a lake where visitors can go rafting in traditional bamboo rafts and explore the Filipino culture at shows, facilities and additional restaurants on the sprawling property. Visit Villa Escudero’s official site for more information.

 

 

11. A Reclaimed Bank and a VIP Vault Room
The Bedford – Chicago, United States


Photograph by The Bedford

 

Located in Chicago’s Wicker Park, The Bedford reclaimed a historic private bank from 1926 and transformed the space into a supper club. The 8,000-square foot (743 sq. m) lower-level interior features terracotta, marble and terrazzo, all reclaimed and restored from the original bank. Inside the VIP vault room, the walls are lined with more than 6,000 working copper lock boxes.

The Bedford – Wicker Park, Chicago
1612 West Division Street (At Ashland Avenue)
Chicago, IL 60622
773-235-8800 Phone

 

 

12. World’s Highest Restaurant
At.mosphere @ Burj Khalifa – Dubai

burj khalifa top floor restaurant atmosphere dubai

 

Sprawling over 1,030 square meters on level 122 of the world’s tallest building (at a height of over 442 meters/1,450 ft), and two levels below the At the Top observatory deck, At.mosphere is the holder of the Guinness World Record for the ‘highest restaurant from ground level’.

For reservations and more information call +971 4 888 3444 or visit the official website.

 

 

13. Eating in the Outback
Sounds of Silence @ Ayers Rock Resort – Australia


Photograph by Ayers Rock Resort

 

At the Sounds of Silence experience you can dine under the canopy of the desert night. The experience begins with canapés and chilled sparkling wine served on a viewing platform overlooking the Uluru-Kata Tjuta National Park. A bush tucker inspired buffet that incorporates native bush ingredients such as crocodile, kangaroo, barramundi and quandong is offered.

Afterwards, a resident star talker decodes the southern night sky, locating the Southern Cross, the signs of the zodiac, the Milky Way, as well as planets and galaxies that are visible. Visit the official site for more information.

 

 

14. Luxury Ferris Wheel Dining
Sky Dining @ The Singapore Flyer – Singapore

singapore flyer private dining

 

At a height of 165 meters (541 ft), the Singapore Flyer is the world’s largest giant observation wheel, offering panoramic views of Marina Bay’s skyline with a glimpse of neighbouring Malaysia and Indonesia. The Full Butler Sky Dining experience comes with exclusive Gueridon service, a 4-course menu, wine pairing options, personalized butler service and skyline views in the comfort of a spacious capsule.

For more information visit the official site.

 

 

15. Spiritual Dining
Pitcher & Piano – Nottingham, England

pitcher and piano nottingham united kingdom

 

Set within a deconsecrated church in the Lace Market, Pitcher & Piano Nottingham features stained glass windows, church candles, vintage bric-a-brac and cosy Chesterfields. An all-day food menu with standard pub fare is offered for breakfast, lunch and dinner.

Pitcher & Piano Nottingham
The Unitarian Church, High Pavement, Nottingham, NG1 1HN
0115 958 6081
nottingham@pitcherandpiano.com

 

 

16. Historic Dining
Ristorante Da Pancrazio – Rome, Italy

risorante da pancrazio rome italy

 

Ristorante Da Pancrazio is built over the ruins of the Theater of Pompey, the well-known 1st-century B.C. theater where Julius Caesar was murdered. The restaurant has become famous for its unique halls, where you can dine on traditional Roman cuisine.

Ristorante Da Pancrazio
piazza del Biscione 92
00186 Roma

 

 

17. Precarious Dining Procurement
Fanweng Restaurant – Yichang, China


 

Located in China’s Hubei province, Fanweng Restaurant sits in the Happy Valley of the Xiling Gorge near the city of Yichang. Carved into a cliff, the restaurant floor hangs several hundred feet above the ground, offering views of the Yangtze River below. Only a portion of the dining area is set over the cliff; the rest of the space sits inside a natural cave. For more information visit this blog, or call the restaurant directly at 0717-8862179.

 

 

18. Monumental Dining
58 Tour Eiffel – Paris, France

58 tour eiffel restaurant paris france by d milherou

Photograph by D. Milherou

 

58 Tour Eiffel is an award-winning restaurant located on the first level of the iconic Eiffel Tower. Diners enjoy a stunning view of the Champ de Mars, Les Invalides, Montparnasse Tower, Montmartre and surrounding cityscape of Paris. Fine French cuisine is prepared by chef Alain Soulard. Visit the official site for more information.

 

 

 


02 May 18:17

Wikis: now with more love

by bkeepers

Documenting the code you share on GitHub can contribute tremendously to the success of your project. When your documentation is easy to access and read, people can better understand how to work with your code and how to contribute as collaborators.

Today we're shipping several UI improvements that make it easier to create, edit, and interact with GitHub Wikis. These changes also make wiki content more consistent with other repository features and pave the way for future updates.

wiki

GitHub Wikis now feature:

  • an upgraded sidebar that lists all of the pages in your wiki along with any custom content you'd like to include
  • more consistent rendering of wiki content alongside other markup in a repository
  • emoji :thumbsup:
  • task lists

wiki-task-list

If you haven't yet enabled a wiki for your project, we've published a Guide to help you get started, and have compiled a showcase of projects that have fantastic wikis for inspiration. Need more help? Check out our revamped documentation articles.

01 May 09:30

Updated Services UI

by atmos

A little over 6 years ago we launched github-services as an open source project. We've had a great deal of success with github-services, but the growing number of supported services makes it difficult to identify the ones that are important to you.

Today we're introducing a more streamlined way of managing services. With the new changes you'll only see the services that you install, and services are searchable from an auto-completer or by scrolling.

services-in-action

01 May 09:28

Domain Name Health Checks for Route 53

by Jeff Barr

Amazon Route 53 is a highly available and scalable Domain Name System (DNS) web service.

Last year we introduced Route 53 health checks. You can configure these health checks to route traffic to a backup website in the event that your primary website fails to respond as expected. We have since enhanced the basic health check model with the addition of string matching, support for HTTPS, and fast interval checks with configurable failover.

Today we are enhancing Route 53's health check model with support for domain name health checks. This new option is an alternative to the existing support for health checks that are directed to a specific IP address.

You can use these health checks along with Route 53’s DNS failover feature to improve the availability of your entire application by automatically routing requests only to healthy endpoints. For example, in a high-availability database scenario, you can create health checks against your primary and secondary database endpoints, such as db-primary-1234.us-west-2.rds.amazonaws.com and db-secondary-1234.us-west-2.rds.amazonaws.com, even though they may have changing IP addresses (for services like Amazon RDS, the IP addresses can and do change, so it’s important to define these endpoints by DNS name rather than by their current IP addresses). You can then create CNAMEs for db.example.com that point to your primary and secondary endpoints, enable the ‘Failover’ routing policy, and associate these CNAME records with the health checks. Your application layer connects to db.example.com for database access, and Route 53’s health checks and DNS failover automatically route requests from your application layer to the right database instance based on their health. Here's a diagram to show you how this all fits together in practice:

You can configure this new type of health check from the AWS Management Console, the AWS CLI, or the Route 53 APIs.
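
Via the APIs, for example, creating a domain name health check is a single call. Here is a minimal sketch with the AWS SDK for Python (boto3), reusing the placeholder database endpoint from the failover scenario above with a plain TCP check on the MySQL port:

# Minimal sketch: create a health check against a domain name (not an IP).
# The endpoint below is the placeholder primary database from the example above.
import uuid
import boto3

route53 = boto3.client("route53")

response = route53.create_health_check(
    CallerReference=str(uuid.uuid4()),   # any unique string; protects against retries
    HealthCheckConfig={
        "Type": "TCP",
        "FullyQualifiedDomainName": "db-primary-1234.us-west-2.rds.amazonaws.com",
        "Port": 3306,
        "RequestInterval": 30,           # or 10 for fast-interval checks
        "FailureThreshold": 3,
    },
)

# Reference this ID from the HealthCheckId field of your PRIMARY failover record set.
print(response["HealthCheck"]["Id"])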

From the AWS Management Console, select the Domain Name option for the endpoint, and then enter the domain name to be checked:

The DNS is re-resolved (in other words, the domain name is translated to an IP address) every time Route 53 performs a health check. The default interval between health checks is 30 seconds unless you have enabled fast interval health checks; in that case the interval is 10 seconds.

For both types of endpoints, Route 53 performs the health checks from multiple locations. Each location does its own DNS resolution; if the name being checked uses latency-based routing or is part of a content delivery network (CDN), Route 53 will check different endpoints as appropriate. This gives you a more accurate indication of the overall global health and accessibility of your application.

This new feature is available now and you can start using it today!

-- Jeff;

01 May 09:12

Programming Sucks

Every friend I have with a job that involves picking up something heavier than a laptop more than twice a week eventually finds a way to slip something like this into conversation: "Bro (it always starts with "Bro"), you don't work hard. I just worked a 4700-hour week digging a tunnel under ...
29 Apr 16:55

Task lists in all markdown documents

by raganwald

Task lists in issues, comments, and pull request descriptions are incredibly useful for project coordination and keeping track of important items. Starting today, we are adding read-only task lists to all Markdown documents in repositories and wikis. So now, when you write:

### Solar System Exploration, 1950s – 1960s

- [ ] Mercury
- [x] Venus
- [x] Earth (Orbit/Moon)
- [x] Mars
- [ ] Jupiter
- [ ] Saturn
- [ ] Uranus
- [ ] Neptune
- [ ] Comet Halley

It will render like this, everywhere on GitHub:

Solar System Exploration, 1950s – 1960s

Edit the document or wiki page and use the - [ ] and - [x] syntax to update your task list.

28 Apr 21:07

Important Change - Managing Your AWS Secret Access Keys

by AWS Evangelist

Last month I urged you to download your secret access key(s) for your AWS (root) account in advance of a planned change in our access model.

We have implemented the change and you can no longer retrieve existing secret access keys for the root account. If you lose your secret access key, you must generate a new access key (an access key ID and a secret access key).

Now is a great time to make a commitment to follow our best practices and create an IAM user that has access keys, instead of relying on root access keys. Using IAM will allow you to set up fine-grained control over access to your AWS resources.
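
If you haven't made the switch yet, it only takes a few API calls. Here is a minimal sketch with the AWS SDK for Python (boto3); the user name and the policy are illustrative, and in practice you would scope the policy to exactly the access the user needs:

# Minimal sketch: create an IAM user with its own access keys so the root
# account keys are no longer needed. User name and policy are illustrative.
import json
import boto3

iam = boto3.client("iam")

iam.create_user(UserName="deploy-bot")

# Scope the user down to what it actually needs; this example only allows S3 reads.
iam.put_user_policy(
    UserName="deploy-bot",
    PolicyName="s3-read-only",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": ["s3:Get*", "s3:List*"], "Resource": "*"}
        ],
    }),
)

key = iam.create_access_key(UserName="deploy-bot")["AccessKey"]
print(key["AccessKeyId"], key["SecretAccessKey"])   # shown only once; store securely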

-- Jeff;