My two cents about software development on the web

Cloud Platforms

  • Windows Server 2012 R2, IIS 8.5, WebSockets and .NET 4.5.2 (AppHarbor Blog)

    During the last couple of weeks we've upgraded worker servers in the US and EU regions to support Windows Server 2012 R2, IIS 8.5 and .NET 4.5.2. Major upgrades like this can be risky and lead to compatibility issues, so the upgrade was carefully planned and executed to maximize compatibility with running applications. Application performance and error rates have been closely monitored throughout the process and fortunately, chances are you haven't noticed a thing: we've detected migration-related issues with fewer than 0.1% of running applications.

    Many of the new features and configuration improvements enabled by this upgrade will be gradually introduced over the coming months. This way we can ensure a continued painless migration and maintain compatibility with the previous Windows Server 2008 R2/IIS 7.5 setup, while we iron out any unexpected kinks if and when they crop up. A few changes, however, have already been deployed that we want to fill you in on.

    WebSocket support and the beta region

    Last year the beta region featuring experimental WS2012 and WebSockets support was introduced. The beta region allowed customers to test existing and new apps on the new setup while we prepared and optimized it for production use. This approach has been an important factor in learning about subtle differences between the server versions, and addressing pretty much all compatibility issues before upgrading the production regions. Thanks to all the customers who provided valuable feedback during the beta and helped ensure a smoother transition for everyone.

    An important reason for the server upgrade was to support WebSocket connections. Now that the worker servers are running WS2012 and IIS 8.5 we've started doing just that. Applications in the old beta region have been merged into the production US region and the beta region is no longer available when you create a new application.

    Most load balancers already support WebSockets, and the upgrade is currently being rolled out to the remaining load balancers. Apps created since August 14th fully support WebSockets and no configuration is necessary: AppHarbor will simply detect and proxy connections as expected when a client sends a Connection: Upgrade header.

    Some libraries, such as SignalR, will automatically detect and prefer WebSocket connections when supported by both the server and client. Until WebSocket connections are supported on all load balancers, some apps may attempt the WebSocket handshake and fail. This should not cause issues, since these libraries fall back to other supported transports, and affected apps will automatically be WebSocket-enabled once their load balancer supports it.
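
    For SignalR specifically, no special setup is needed to benefit from this. As a minimal sketch (assuming the SignalR 2.x OWIN hosting model; the hub and method names are illustrative):

```csharp
using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // SignalR negotiates the best available transport on its own:
        // it attempts a WebSocket handshake first and transparently
        // falls back to server-sent events or long polling if that fails.
        app.MapSignalR();
    }
}

public class ChatHub : Hub
{
    public void Send(string message)
    {
        // Broadcast to all connected clients over whichever
        // transport each client ended up with.
        Clients.All.broadcast(message);
    }
}
```

    Because the fallback happens during connection negotiation, the same code automatically starts using WebSockets once the client's load balancer supports them.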

    CPU throttling

    One of the major challenges that has held back this upgrade is a change in the way we throttle worker CPU usage. CPU limitations are the same as before, but the change can affect how certain CPU-intensive tasks are executed. Resources and documentation on this subject are limited, but testing shows that CPU time is more evenly scheduled across threads, leading to higher concurrency, consistency and stability within processes. While this is overall an improvement it can also affect peak performance on individual threads, and we're currently investigating various approaches to better support workloads affected by this.

    For the curious, we previously used a CPU rate limit registry setting to limit CPU usage per user account, but this is no longer supported on Windows Server 2012. We now use a combination of IIS 8's built-in CPU throttling and a new CPU rate control for job objects to throttle background workers.

    If you've experienced any issues with this upgrade or have feedback about the process, please don't hesitate to reach out.

  • Heartbleed Security Update (AppHarbor Blog)

    Updated on April 10, 2014 with further precautionary steps in the "What can you do" section below.

    On April 7, 2014, a serious vulnerability in the OpenSSL library (CVE-2014-0160) was publicly disclosed. OpenSSL is a cryptography library used for the majority of private communications across the internet.

    The vulnerability, nicknamed "Heartbleed", would allow an attacker to steal secret certificate keys, user names and passwords, and other secrets encrypted using the OpenSSL library. As such it represents a major risk for a large number of internet applications and services, including AppHarbor.

    What has AppHarbor done about this

    AppHarbor responded to the announcement by immediately taking steps to remediate the vulnerability:

    1. We updated all affected components with the updated, secure version of OpenSSL within the first few hours of the announcement. This included SSL endpoints and load balancers, as well as other infrastructure components used internally at AppHarbor.
    2. We re-keyed and redeployed all potentially affected AppHarbor SSL certificates (including the piggyback * certificate), and the old certificates are being revoked.
    3. We notified customers with custom SSL certificates last night, so they could take steps to re-key and reissue certificates, and have the old ones revoked.
    4. We reset internal credentials and passwords.
    5. User session cookies were revoked, requiring all users to sign in again.

    Furthermore, AppHarbor validates session cookies against your previously known IP addresses as part of the authorization process. This has reduced the risk of a stolen session cookie being abused. Perfect forward secrecy was deployed to some load balancers, making it impossible to read intercepted and encrypted communication with stolen keys. Forward secrecy has since been deployed to all load balancers hosted by AppHarbor.

    What can you do

    We have found no indication that the vulnerability was used to attack AppHarbor. By quickly responding to the issue and taking the steps mentioned above we effectively stopped any further risk of exposure. However, due to the nature of this bug, we recommend users who want to be extra cautious to take the following steps:

    1. Reset your AppHarbor password.
    2. Review the sign-in and activity history on your user page for any suspicious activity.
    3. Revoke authorizations for external applications that integrate with AppHarbor.
    4. Recreate, reissue and reinstall any custom SSL certificates you may have installed, and revoke the old ones. Reissuing may revoke the old certificates, so make sure you're ready to install the new ones.
    5. Read the details about the Heartbleed bug here and assess the risks relative to your content.

    Updated instructions (April 10, 2014):

    While we still have not seen any abuse on AppHarbor as a result of this bug, we now also encourage you to take these precautionary steps:

    1. Reset your build URL token.
    2. If you're using one of the SQL Server or MySQL add-ons: Reset the database password. Go to the add-on's admin page and click the "Reset Password" button. This will immediately update the configuration on AppHarbor and redeploy the application (with a short period of downtime until it is redeployed).
    3. If you're using the Memcacher add-on: Reinstall the add-on by uninstalling and installing it.
    4. Rotate/update sensitive information in your own configuration variables.

    If you have hardcoded passwords/connection strings for any of your add-ons, this is a good opportunity to start using the injected configuration variables. You can find instructions for the SQL add-ons here and the Memcacher add-on here. This way your application is automatically updated when you reset the add-ons, or when an add-on provider updates the configuration. If this is not an option, you should immediately update your code/configuration files and redeploy the application after the configuration is updated.
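
    As a sketch of what using the injected configuration looks like in code (the connection string alias "MyDb" below is a placeholder; use the alias shown on your add-on's page):

```csharp
using System.Configuration;

public static class Database
{
    // Read the connection string that AppHarbor injects at deploy time
    // instead of hardcoding credentials in source control. When the
    // add-on password is reset, the redeployed application automatically
    // picks up the new value.
    public static string ConnectionString
    {
        get
        {
            return ConfigurationManager
                .ConnectionStrings["MyDb"]
                .ConnectionString;
        }
    }
}
```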

    Stay tuned

    Protecting your code and data is our top priority, and we continue to remediate and assess the risks in response to this issue. We'll keep you posted with any new developments, so stay tuned on Twitter and the blog for important updates. We're of course also standing by on the support forums if you have any questions or concerns.

  • Librato integration and built-in performance metrics (AppHarbor Blog)

    Librato Dashboard

    Being able to monitor and analyze key application metrics is an essential part of developing stable, performant and high-quality web services that meet your business requirements. Today we’re announcing a great new set of features to provide a turnkey solution for visualizing, analyzing and acting on key performance metrics. On top of that we’re enabling you to easily track your own operational metrics. In this blog post we’ll look at how the pieces tie together.

    Librato integration

    The best part of today’s release is our new integration with Librato for monitoring and analyzing metrics. Librato is an awesome and incredibly useful service that enables you to easily visualize and correlate metrics, including the new log-based performance metrics provided by AppHarbor (described in more detail below).

    Librato Dashboard

    Librato is now available as an add-on and integrates seamlessly with your AppHarbor logs. When you provision the add-on, Librato will set up a preconfigured dashboard tailored for displaying AppHarbor performance data, and you can access it immediately by going to the Librato admin page. Everything will work out of the box without any further configuration, and your logs will automatically be sent to Librato using a log drain.

    When log messages containing metric data are sent to Librato they’re transformed by an l2met service before being sent to Librato's regular API. A very cool feature of the l2met service is that it can automatically calculate some useful derived metrics. For instance, it’ll calculate the median response time as well as the 99th and 95th percentiles of measurements such as response times. The perc99 response time is the slowest response among the fastest 99% of responses; in other words, 99% of requests completed faster than this value. It's useful to know because it's less affected by a few very slow responses than the average is. Among other things this provides a good measurement of the browsing experience for most of your users.
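
    To make the percentile semantics concrete, here's a small sketch of one common way ("nearest rank") to compute a percentile from a batch of response times; l2met's exact method may differ:

```csharp
using System;
using System.Linq;

class PercentileExample
{
    // Nearest-rank percentile: the smallest measurement such that at
    // least p percent of all measurements are less than or equal to it.
    static double Percentile(double[] values, double p)
    {
        var sorted = values.OrderBy(v => v).ToArray();
        int rank = (int)Math.Ceiling(p / 100.0 * sorted.Length);
        return sorted[rank - 1];
    }

    static void Main()
    {
        var responseTimesMs = new[] { 12.0, 15, 11, 230, 14, 13, 16, 12, 11, 14 };
        // The single 230 ms outlier dominates the average but leaves the
        // median untouched, which is why percentiles paint a more
        // representative picture of what most users experience.
        Console.WriteLine(Percentile(responseTimesMs, 50)); // 13
        Console.WriteLine(Percentile(responseTimesMs, 99)); // 230
    }
}
```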

    Librato Dashboard

    The l2met project was started by Ryan Smith - a big shout-out and thanks to him and the Librato team for developing this great tool.

    For more information about how to integrate with Librato and details about the service please refer to the documentation here. Also check out their announcement blog post about the integration.

    Built-in performance metrics

    AppHarbor can now write key runtime performance metrics directly to your application’s log stream as l2met 2.0 formatted messages similar to this:

    source=web.5 sample#memory.private_bytes=701091840
    source=web.5 sample#process.handles=2597
    source=web.5 sample#cpu.load_average=1.97

    These are the messages Librato uses as well, and most of them are written every 20 seconds. They allow for real-time monitoring of worker-specific runtime metrics such as CPU (load average) and memory usage, as well as measurements of response time and size reported from the load balancers. Because these metrics are logged to your log stream, you can also consume them in the same way you’d usually view or integrate with your logs.
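
    Because the metrics are plain text, consuming them from a drain or log session is straightforward. A rough sketch of pulling the key/value pairs out of a line like the samples above (an illustration based on those samples, not a complete l2met parser):

```csharp
using System;
using System.Collections.Generic;

class L2MetLineExample
{
    // Split an l2met-style line into its space-separated key=value tokens.
    static Dictionary<string, string> Parse(string line)
    {
        var fields = new Dictionary<string, string>();
        foreach (var token in line.Split(' '))
        {
            var parts = token.Split(new[] { '=' }, 2);
            if (parts.Length == 2)
            {
                fields[parts[0]] = parts[1];
            }
        }
        return fields;
    }

    static void Main()
    {
        var fields = Parse("source=web.5 sample#memory.private_bytes=701091840");
        Console.WriteLine(fields["source"]);                      // web.5
        Console.WriteLine(fields["sample#memory.private_bytes"]); // 701091840
    }
}
```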

    Load average run-time metrics

    Performance data collection takes place completely out-of-process, without using a profiler, and it can be enabled and disabled without redeploying the application. This means that monitoring won’t impact application performance at all and that a profiler (such as New Relic) can still be attached to the application.

    Writing custom metrics

    The performance data provided by AppHarbor is probably not the only set of metrics you want to track. You can of course integrate directly with Librato’s API, but the l2met integration makes it easier than ever to track your own metrics, and the paid Librato plans include the ability to track custom metrics exactly for that purpose.

    You can start writing your own metrics simply by sending an l2met-formatted string to your logs. Last week we introduced the Trace Logging feature, which is perfect for this, so writing a custom metric can now be done with a simple trace (the metric name and value here are illustrative):

    Trace.TraceInformation("measure#response.time=12ms");

    To make this even easier we’ve built the metric-reporter library (a .NET port of Librato’s log-reporter) to provide an easy-to-use interface for writing metrics to your log stream. You can install it with NuGet:

    Install-Package MetricReporter

    Then initialize a MetricReporter which writes to a text writer:

    var writer = new L2MetWriter(new TraceTextWriter());
    var reporter = new MetricReporter(writer);

    And start tracking your own custom metrics:

    reporter.Measure("payload.size", 21276);
    reporter.Measure("twitter.lookup.time", () =>
    {
        // Do work
    });

    On Librato you can then view charts with these new metrics alongside the performance metrics provided by AppHarbor, add them to your dashboards, aggregate and correlate data, set up alerts, etc. The MetricReporter library takes care of writing l2met-formatted metrics with the appropriate metric types to the trace or another IO stream. Make sure to inspect the README for more examples and information on configuration and usage.

    That’s all we have for today. There’ll be more examples on how you can use these new features soon, but for now we encourage you to take it for a spin, install the Librato add-on and test the waters for yourself. We’d love to hear what you think so if there are other metrics you’d like to see or if you experience any issues please hit us up through the usual channels.

  • Introducing Trace Logging (AppHarbor Blog)

    Today we’re happy to introduce trace message integration with your application log. With tracing you can very easily log trace messages to your application's log stream by using the built-in tracing capabilities of the .NET framework from anywhere in your application.

    When introducing the realtime logging module a while back we opened up access to collated log data from load balancers, the build and deploy infrastructure, background workers and more. Notably missing however was the ability to log from web workers. We’re closing that gap with tracing, which can be used in both background and web workers.

    How to use it

    The trace feature integrates with standard .NET tracing, so you don’t have to make any changes to your application to use it. You can simply log traces from your workers with the System.Diagnostics.Trace class:

    Trace.TraceInformation("Hello world");

    This will yield a log message containing a timestamp and the source of the trace in your application’s log like so:

    2014-01-22T06:46:48.086+00:00 app web.1 Hello world

    You can also use a TraceSource by specifying the trace source name AppHarborTraceSource:

    var traceSource = new TraceSource("AppHarborTraceSource", defaultLevel: SourceLevels.All);
    traceSource.TraceEvent(TraceEventType.Critical, 0, "Foo");

    You may not always want noisy trace messages in your logs, so you can configure the trace level on the "Logging" page. There are four levels: All, Warning, Error and None. Setting the trace level updates the configuration without redeploying or restarting the application. This is often desirable when you need to turn on tracing to debug and diagnose an ongoing or state-related issue.

    Configure Trace level

    There are a number of other ways to use the new tracing feature including:

    • ASP.NET health monitoring (for logging exceptions, application lifecycle events etc).
    • A logging library such as NLog (Trace target) or log4net (TraceAppender).
    • Integrating with ETW (Event Tracing for Windows) directly using the injected event provider id.

    Anything that integrates with .NET tracing or ETW should work, and you can find more details and examples in this knowledge base article.

    All new applications have tracing enabled by default. Tracing can be enabled for existing applications on the "Logging" page.

    How does it work

    Under the hood we’re using ETW for delivering log messages to the components that are responsible for sending traces to your log stream. Application performance is unaffected by the delivery of log messages as this takes place completely out of process. Note however that messages are buffered for about a second and that some messages may be dropped if you’re writing excessively to the trace output.

    When tracing is enabled, AppHarbor configures your application with an EventProviderTraceListener as a default trace listener. While you can integrate directly with ETW as well we recommend using the Trace or TraceSource approaches described above.

    Viewing trace messages

    Traces are collated with other logging sources in your log stream, so you can consume them in the same way you’re used to. You can view log messages using the command line interface, the web viewer or set up a log drain to any HTTP, HTTPS or syslog endpoint. For more information about the various integration points please refer to this article.

    Viewing trace messages in console

    We’ve got a couple of cool features that build on this coming soon, so stay tuned and happy tracing!

  • .NET 4.5.1 is ready (AppHarbor Blog)

    Microsoft released .NET 4.5.1 a while back, bringing a bunch of performance improvements and new features to the framework. Check out the announcement for the details.

    Over the past few weeks we have updated our build infrastructure and application servers to support this release. We're happy to report that AppHarbor now supports building, testing and running applications targeting the .NET 4.5.1 framework, as well as solutions created with Visual Studio 2013 and ASP.NET MVC 5 applications.

    There are no known issues related to this release. If you encounter problems, please refer to the usual support channels and we'll help you out.

    .NET logo

  • Integrated NuGet Package Restore (AppHarbor Blog)

    A few months ago the NuGet team released NuGet 2.7, which introduced a new approach to package restore. We recently updated the AppHarbor build process to adopt this approach and integrate the new NuGet restore command. AppHarbor will now automatically invoke package restore before building your solution.

    Automatically restoring packages is a recommended practice, especially because you don’t have to commit the packages to your repository and can keep its footprint small. Until now we’ve recommended the approach described in this blog post to restore NuGet packages when building your application. This has worked relatively well, but it’s also a bit of a hack and has a few caveats:

    • Some NuGet packages rely on files that need to be present and imported when MSBuild is invoked. This has most notably been an issue for applications relying on the Microsoft.Bcl.Build package, for the reasons outlined in this article.
    • NuGet.exe has to be committed and maintained with the repository, and project and solution files need to be configured.
    • Package restore can intermittently fail in some cases when multiple projects are built concurrently.

    With this release we expect to eliminate these issues and provide a more stable, efficient and streamlined way of handling package restore.

    If necessary, NuGet can be configured by adding a NuGet.config file in the same directory as your solution file (or alternatively in a .nuget folder under your solution directory). You usually don't have to configure anything if you’re only using the official NuGet feed, but you’ll need to configure your application if it relies on other package sources. You can find an example configuration file which adds a private package source in the knowledge base article about package restore and further documentation for NuGet configuration files can be found here.
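
    For illustration, a minimal sketch of such a NuGet.config with an added private source (the feed name and URL are placeholders):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- The official feed works without configuration; additional
         sources such as a private feed can be added here. -->
    <add key="MyPrivateFeed" value="https://example.com/nuget/" />
  </packageSources>
</configuration>
```

    Place the file next to your solution file (or in a .nuget folder under the solution directory) and commit it, so the build servers can restore packages from the extra source.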

    If you hit any snags we’re always happy to help on our support forums.

    NuGet logo

  • New Relic Improves Service and Reduces Price (AppHarbor Blog)

    New Relic

    We're happy to announce that New Relic has dropped the price of the Professional add-on plan from $45/month to $19/month per worker unit. Over the years New Relic has proven to be a really useful tool for many of our customers, and we're pleased that this price drop will make the features of New Relic Professional more accessible to everyone using AppHarbor.

    Highlights of the Professional plan include:

    • Unlimited data retention
    • Real User Monitoring (RUM) and browser transaction tracing
    • Application transaction tracing, including Key Transactions and Cross Application Tracing
    • Advanced SQL and slow SQL analysis

    You can find more information about the benefits of New Relic Pro on the New Relic website.

    Service update

    The New Relic agent was recently upgraded to a newer version which brings support for some recently introduced features as well as a bunch of bug fixes. Time spent in the request queue is now reported and exposed directly in the New Relic interface. Requests are rarely queued for longer than a few milliseconds, but it can happen if your workers are under load. When more time is spent in the request queue it may be an indicator that you need to scale your application to handle the load efficiently.

    We're also making a few changes to the way the New Relic profiler is initialized with your applications. This is particularly relevant if you've subscribed to New Relic directly rather than installing the add-on through AppHarbor. Going forward you'll need to add a NewRelic.LicenseKey configuration variable to make sure the profiler is attached to your application. We recommend that you make this change as soon as possible. If you're subscribed to the add-on through AppHarbor no action is required and the service will continue to work as usual.

  • Found Elasticsearch add-on available (AppHarbor Blog)

    Found ElasticSearch

    Found provides fully hosted and managed Elasticsearch clusters; each cluster has reserved memory and storage ensuring predictable performance. The HTTPS API is developer-friendly and existing Elasticsearch libraries such as NEST, Tire, PyES and others work out of the box. The Elasticsearch API is unmodified, so for those with an existing Elasticsearch integration it is easy to get started.
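
    As an illustration of the "works out of the box" point, connecting with NEST looks the same as against any other Elasticsearch cluster, roughly like this (the cluster URL, index name and document type are placeholders, and the exact fluent API depends on your NEST version):

```csharp
using System;
using Nest;

public class Article
{
    public int Id { get; set; }
    public string Title { get; set; }
}

class Program
{
    static void Main()
    {
        // Point the client at the HTTPS endpoint the add-on provides.
        var settings = new ConnectionSettings(
            new Uri("https://example-cluster.example.com:9243"))
            .SetDefaultIndex("articles");
        var client = new ElasticClient(settings);

        // Index a document, then run a simple query string search.
        client.Index(new Article { Id = 1, Title = "Hello Elasticsearch" });
        var result = client.Search<Article>(s => s
            .Query(q => q.QueryString(qs => qs.Query("hello"))));
        Console.WriteLine(result.Total);
    }
}
```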

    For production and mission-critical environments, customers can opt for replication and automatic failover to a secondary site, protecting the cluster against unplanned downtime. Security is a strong focus: communication to and from the service is securely transmitted over HTTPS (SSL) and data is stored behind multiple firewalls and proxies. Clusters run in isolated containers (LXC) and customisable ACLs allow for restricting access to trusted people and hosts.

    In the event of a datacenter failure, search clusters are automatically failed over to a working datacenter or, in case of a catastrophic event, completely rebuilt from backup.

    Co-founder Alex Brasetvik says: "Found provides a solution for companies who are keen to use Elasticsearch but not overly keen to spend their time and money on herding servers! We provide our customers with complete cluster control: they can scale their clusters up or down at any time, according to their immediate needs. It's effortless and there's zero downtime."

    More information and price plans are available on the add-on page.

  • Introducing Realtime Logging (AppHarbor Blog)

    Today we're incredibly excited to announce the public beta of our brand new logging module. Starting immediately all new applications created on AppHarbor will have logging enabled. You can enable it for your existing apps on the new "Logging" page.

    We know all too well that running applications on a PaaS like AppHarbor can sometimes feel like a black box. Until now we haven't had a unified, simple and efficient way to collate, present and distribute log events from the platform and your apps.

    That's exactly what we wanted to address with our logging solution, and based on the amazing feedback from private beta users we feel confident that you'll find it useful for getting insight about your application and AppHarbor. A big thanks to all the beta testers who have helped us refine and test these new features.

    The new logging module collates log messages from multiple sources, including almost all AppHarbor infrastructure components and your applications. API changes, load balancer request logs, build and deploy output, stdout/stderr from your background workers and more can now be accessed and sent to external services in real time.

    Captain's log. Consider yourself lucky we're not that much into skeuomorphism.


    We're providing two interfaces "out of the box": a convenient web interface can be accessed on the Logging page, and a new log command has been added to the CLI. Get the installer directly from here or install with Chocolatey: cinst appharborcli.install. To start a "tailing" log session with the CLI, you can for instance run appharbor log -t -s appharbor. Type appharbor log -h to see all options.

    The web interface works a bit differently, but try it out and let us know what you think. It's heavily inspired by a project that built a great client-side interface for viewing, filtering, searching and splitting logs into multiple "screens".

    Log web interface


    One of the most useful and interesting aspects of today's release is the flexible integration points it provides. Providing access to your logs in realtime is one thing, but AppHarbor will only store the last 1500 log messages for your application. Storing, searching, viewing and indexing logs can be fairly complex, and luckily many services already exist that help you make more sense of your log data.

    We've worked with Logentries to provide a completely automated and convenient way of sending AppHarbor logs to them. When you add the Logentries add-on, your application is automatically configured to send logs to Logentries, and Logentries is configured to display log messages in AppHarbor's format.

    Logentries integration

    You can also configure any syslog (TCP), HTTP or HTTPS endpoint you like with log "drains". You can use this to integrate with services like Loggly and Splunk, or even your own syslog server or HTTP service. More details about log drains are available in this knowledge base article and the drain API documentation.

    Finally, there's a new Log session API endpoint that you can use to create sessions similar to the ones used by the interfaces we provide.


    If you've ever used Heroku you'll find most of these features very familiar. That's no coincidence - the backend is based on Heroku's awesome distributed syslog router, Logplex. Integrating with Logplex makes it a lot easier for add-on providers who already support Heroku's Logplex to integrate with AppHarbor, while giving us a scalable and proven logging backend to support thousands of deployed apps.

    Logplex is also in rapid, active development, and a big shout-out to the awesome people at Heroku who are building this incredibly elegant solution. If you're interested in learning more about Logplex we encourage you to check out the project on Github and try it for yourself. We've built a client library for interacting with Logplex's HTTP API and HTTP log endpoints from .NET apps - let us know if you'd like to use this and we'll be happy to open source the code. The Logplex documentation on stream management is also useful for a high-level overview of how Logplex works.

    Next steps

    With this release we've greatly improved the logging experience for our customers. We're releasing this public beta since we know it'll be useful to many of you as it is, but we're by no means finished. We want to add even more log sources, provide more information from the various infrastructure components and integrate with more add-on providers. Also note that request logs are currently only available on shared load balancers, but they will be rolled out to all load balancers soon. If you find yourself wanting some log data that is not currently available, please let us know. We now have a solid foundation to provide you with the information you need when you need it, and we couldn't be more excited about that.

    We'll provide you with some examples and more documentation for these new features over the next couple of weeks, but for now we hope you'll take it for a spin and test the waters for yourself. Have fun!

  • Introducing PageSpeed optimizations (AppHarbor Blog)

    Today we're introducing a new experimental feature: Google PageSpeed optimization support. The PageSpeed module is a suite of tools that optimizes web page latency and bandwidth usage by rewriting your content to implement web performance best practices. Reducing the number of requests to a single domain, optimizing cache policies and compressing content can significantly improve web performance and lead to a better user experience.

    With PageSpeed optimization filters we're making it easier to apply some of these best practices, providing a solution that efficiently and effortlessly speeds up your web apps. The optimizations take place at the load balancer level and work for all web applications, no matter what framework or language you use.

    As an example of how this works you can inspect the HTML and resources of this blog to see some of the optimizations that are applied. Analyzing with the online PageSpeed insights tool yields a "PageSpeed score" of 88 when enabled versus 73 when disabled. Not too bad considering it only took a click to enable it.

    PageSpeed button

    You can enable PageSpeed optimizations for your web application on the new "Labs" page, which can be found in the application navigation bar. The application will be configured with PageSpeed's core set of filters within a few seconds, and these filters will then be applied to your content.

    When you've enabled PageSpeed we recommend that you test the application to make sure it doesn't break anything. You can also inspect the returned content in your browser and if you hit any snags simply disable PageSpeed and let support know about it. Note that only content transferred over HTTP from your domain will be processed by PageSpeed filters. To optimize HTTPS traffic you can enable SPDY support (although that is currently only enabled on dedicated load balancers and in the beta region).

    We'll make more filters available later on, but for the beta we're starting out with a curated set of core filters, which are considered safe for most web applications. There are a few other cool filters we'll add support for later on - such as automatic sprite image generation and lazy-loading of images. Let us know if there are any filters in the catalog you think we should support!

  • Evolution of the Heroku CLI: 2008-2017 (Heroku)
    15 Aug 2017 15:45

    Over the past decade, millions of developers have interacted with the Heroku CLI. In those 10 years, the CLI has gone through many changes. We've changed languages several times; redesigned the plugin architecture; and improved test coverage and the test framework. What follows is the story of our team's journey to build and maintain the Heroku CLI from the early days of Heroku to today.

    1. Ruby (CLI v1-v3)
    2. Go/Node (CLI v4)
    3. Go/Node (CLI v5)
    4. Pure Node (CLI v6)
    5. What's Next?

    Ruby (CLI v1-v3)

    Our original CLI (v1-v3) was written in Ruby and served us well for many years. Ruby is a great, expressive language for building CLIs; however, we started experiencing enough problems that we knew it was time to start thinking about major changes for the next version.

    For example, the v3 CLI performed at about half the speed on Windows as it did on Unix. It was also difficult to keep the Ruby environment for a user's application separate from the one used by the CLI. A user may be working on a legacy Ruby 1.8.7 application with gems specific to Ruby 1.8.7. These must not conflict with the Ruby version and gem versions the CLI uses. For this reason, commands like heroku local (which came later) would have been hard to implement.

    However, we liked the plugin framework of the v3 CLI. Plugins provide a way for us to incubate new features, testing them first internally and then in private and public beta. Not only does this allow us to write experimental code that we don't have to ship to all users, but also, since the CLI is an open-source project, it lets us avoid exposing products we're just getting started on (or that are experimental) in a public repository. A new CLI not only needed to provide a plugin framework like v3's, it was also something we wanted to expand on.

    Another reason we needed to rewrite the CLI was to move to Heroku's API v3. At the start of this project, we knew that the old API would be deprecated within a few years, so we wanted to kill two birds with one stone by moving to the new API as we rewrote the CLI.

    Go/Node (CLI v4)

    When we started planning for v4, we originally wanted the entire CLI to be written in Go. Before I started at the company, an experimental CLI called hk had even been built in Go. hk was a major departure from the existing CLI that changed not only all the internals, but all the commands and IO as well.

    Parity with CLI v3

    We couldn't realistically see a major switch to a new CLI that didn't keep at least a very similar command syntax. CLIs are not like web interfaces, and we learned this the hard way. On the web you can move a button around, and users won't have much trouble seeing where it went. Renaming a CLI command is a different matter. This was incredibly disruptive to users. We never want users to go through frustration like that again. Continuing to use existing syntax and output was a major goal of this project and all future changes to the CLI.

    While we were changing things, we identified some commands that we felt needed work with their input or output. For example, the output of heroku addons changed significantly using a new table output. We were careful to display deprecation warnings on significant changes, though. This is when we first started using color heavily in the CLI. We disable color when the output is not a tty to avoid any issues with parsing the CLI output. We also added a --json option to many commands to make it easier to script the CLI with jq.
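    The color behavior described above can be sketched in a few lines of Node (an illustration, not the CLI's actual helper):

    ```javascript
    // Emit ANSI color codes only when stdout is an interactive terminal;
    // piped output (e.g. into jq or grep) stays plain and easy to parse.
    function maybeColor (text, ansiCode) {
      if (!process.stdout.isTTY) return text // not a tty: no escape codes
      return `\u001b[${ansiCode}m${text}\u001b[0m`
    }

    console.log(maybeColor('Done!', '32')) // green in a terminal, plain in a pipe
    ```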

    No Runtime Dependency

    In v3, ensuring that we had a Ruby binary that didn't conflict with anything on the user's machine on all platforms was a big headache. The way it was done before also did not allow us to update Ruby without installing a new CLI (so we would've been stuck with Ruby 1.9 forever). We wanted to ensure that the new CLI didn't have a runtime dependency so that we could write code in whatever version of Ruby we wanted to without worrying about compatibility.

    So Why Not All Go?

    You might still be wondering why we didn't reimplement both the plugins and core in Go (but maintain the same command syntax) to obviate our runtime dependency concerns. As I mentioned, originally we did want to write the CLI in Go as it provided extremely fast single-file binaries with no runtime dependency. However, we had trouble reconciling this with the goals of the plugin interface. At the time, Go provided no support for dynamic libraries, and even today this capability is extremely limited. We considered an approach where plugins would be a set of compiled binaries that could be written in any language, but this didn't provide a strong interface to the CLI. It also raised the question of where they would get compiled for all the architectures.

    Node.js for Plugins and Improved Plugin Architecture

    This was when we started to think about Node as the implementation language for plugins. The goal was for the core CLI (written in Go) to download Node just to run plugins and to keep this Node separate from any Node binary on the machine. This kept the runtime dependency to a minimum.

    Additionally, we wanted plugins to be able to have their own dependencies (library not runtime). Ruby made this hard as it's very difficult to have multiple versions of the same gem installed. If we ever wanted to update a gem in v3, we had to go out of our way to fix every plugin in the ecosystem to work with the new version. This made updating any dependencies difficult. It also didn't allow plugins to specify their own dependencies. For example, the heroku-redis plugin needs a redis dependency that the rest of the CLI doesn't need.
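    As an illustration, per-plugin dependencies are exactly what npm's package.json gives each plugin. A hypothetical manifest for the heroku-redis plugin might look like this (the version numbers here are made up):

    ```json
    {
      "name": "heroku-redis",
      "version": "1.0.0",
      "dependencies": {
        "redis": "^2.8.0"
      }
    }
    ```

    Because each plugin carries its own dependency tree, two plugins can depend on different versions of the same library without conflict, something Ruby's shared gem environment made very difficult.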

    We also wanted to improve the plugin integration process. In v3, when we wanted functionality from a plugin to move into the core of the CLI, it was a manual step that involved moving the commands and code into the core and then deprecating the old plugin. The process was fraught with errors, and issues were compounded because it usually wasn't done by a CLI engineer but by a member of another team who was moving a plugin for the first time.

    Ultimately we decided to flip this approach on its head. Rather than figure out an easy way to migrate plugin commands into the core, we made the CLI a collection of core plugins. In other words, a plugin could be developed on its own and installed as a “user plugin”, then when we wanted to deliver it to all users and have it come preinstalled, we simply declared it as a “core plugin”. No modifications to the plugin itself would be required.
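    A minimal sketch of this model (hypothetical names, not the actual cli-engine internals):

    ```javascript
    // A "core" plugin and a "user" plugin are the same kind of thing; shipping
    // a plugin to all users just means listing it as preinstalled.
    const corePlugins = ['heroku-apps', 'heroku-run'] // preinstalled for everyone
    const userPlugins = ['heroku-accounts']           // installed by this user

    // The CLI's command set is simply the union of both lists.
    function installedPlugins () {
      return [...corePlugins, ...userPlugins]
    }

    console.log(installedPlugins().length) // 3
    ```

    Promoting a plugin is then a one-line change to the preinstalled list; the plugin's own code never needs to be modified.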

    This model provided another benefit. The CLI is now a modular set of plugins where each plugin could potentially be maintained by a separate team. The CLI provides an interface that plugins must meet, but outside of that, individual teams can build their own plugins the way they want without impacting the rest of the codebase.

    Allowing these kinds of differences in plugins is actually really powerful. It has allowed developers on other teams and other companies to provide us with clever ideas about how to build plugins. We've continually been able to make improvements to the plugin syntax and conventions by allowing other developers the ability to write things differently as long as they implemented the interface.

    Slow Migration

    One thing I've learned from doing similar migrations on back-end web services is that it's always easier to migrate something bit-by-bit rather than doing a full-scale replacement. The CLI is a huge project with lots of moving parts. Doing a full-scale replacement would have been a 1+ year project and would have involved a painful QA process while we validated the new CLI.

    Instead, we decided to migrate each command individually. We started out by writing a small core CLI with just a few lesser-used commands and migrating them from the v3 CLI to the v4 CLI one at a time. Moving slowly allowed us to identify issues with specific commands (whether in the core of the CLI, the command itself, or the use of the new API). This minimized effort on our part and user impact by allowing us to quickly jump on issues related to command conversion.

    We knew when we started that this project would likely take 2 years or longer. It wasn't our only task during this time, though, so the incremental approach enabled us to make continual progress while also working on other things. Over the course of the project, we sometimes spent more time on command conversion, sometimes less. Whatever made sense for us at the time.

    The only real drawback with this approach was user confusion. Seeing two versions of the CLI listed when running heroku version was odd and it also wasn't clear where the code lived for the CLI.

    We enabled the gradual migration from v3 to v4 by first having v3 download v4, if it did not exist, into a dotfile of the user's home directory. v4 provides a hidden command heroku commands --json that outputs all the information about every command including the help. When v3 starts, it runs this command so that it knows what commands it needs to proxy to v4 as well as what the full combined help is for both v3 and v4.
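    The proxying described above can be sketched like this (hypothetical names; in the real CLI the manifest came from running the hidden heroku commands --json command):

    ```javascript
    // v3 owns some commands natively and proxies the rest to v4. The v4
    // manifest is passed in directly here to keep the sketch self-contained.
    function buildDispatcher (v3Commands, v4Manifest) {
      const v4Commands = new Set( => c.command))
      return function dispatch (command) {
        if (v3Commands.includes(command)) return 'run-in-v3'
        if (v4Commands.has(command)) return 'proxy-to-v4'
        return 'unknown-command'
      }
    }

    const dispatch = buildDispatcher(['apps', 'config'], [{ command: 'logs' }])
    console.log(dispatch('logs'))   // proxy-to-v4
    console.log(dispatch('config')) // run-in-v3
    ```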

    For 2 years we shipped our v4 Go/Node CLI alongside v3. We converted commands one by one until everything was converted.

    Go/Node (CLI v5)

    The v5 release of the CLI was more of an incremental change. Users would occasionally see issues with v4 when first running the CLI because it had trouble downloading Node or the core plugins. v5 was a change from downloading Node when the CLI was first run, to including Node in the initial tarball so it would be available when the CLI first loaded. Another change was that instead of running npm install to install the core plugins on first run, we included all the core plugins' Node files with the initial tarball and kept the user plugins separate.

    Ruby to Node Command Conversion Complete

    In December 2016, we finally finished converting all the commands into the new plugins-based CLI. At this point we modified our installers to no longer include the v3 CLI and the shim that launched the v4 or v5 CLI. Existing users with the CLI already installed as of this time will still be using the v3 CLI because we can't auto-update all parts of the CLI, but new installers will not include v3 and are fully migrated to v5 (or now, v6). If you still have the Ruby CLI installed (you’ll know if you run ‘heroku version’ and see v3.x.x mentioned), you’ll benefit from a slight speed improvement by installing the current version of the CLI to get rid of these old v3 components.

    Pure Node (CLI v6)

    In April 2017 we released the next big iteration of the CLI, v6. This brought a number of advantages: a lighter, more generic core written only in Node that could be used as a template for building other CLIs, and a new syntax for writing commands.

    Leaving Go

    While at Heroku we use Go heavily on the server side with great success, Go did not work out well for us as a CLI language. OS updates would cause networking issues, and cross-compiling caused problems where linking to native objects did not work. Go is also a relatively low-level language, which increased the time needed to write new functionality. Because we were writing very similar, if not exactly the same, code in Ruby and Node, we could directly compare how difficult it was to write the same functionality in multiple languages.

    We had long felt that the CLI should be written in pure Node. In addition to only having one language used and fewer of the issues we had writing the CLI in Go, it also would allow for more communication between plugins and the core. In v4 and v5, the CLI started a new Node process every time it wanted to request something from a plugin or command (which takes a few hundred ms). Writing the CLI entirely in Node would keep everything loaded in a single process. Among other things, this allowed us to design a dynamic autocomplete feature we had long wanted.
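    A rough way to see the per-invocation cost of spawning a new Node process, as opposed to calling already-loaded code in-process (an illustration, not CLI code):

    ```javascript
    // Compare starting a brand-new node process with calling code that is
    // already loaded in the current process.
    const { execFileSync } = require('child_process')

    function timeMs (fn) {
      const start = process.hrtime.bigint()
      fn()
      return Number(process.hrtime.bigint() - start) / 1e6
    }

    const spawned = timeMs(() => execFileSync(process.execPath, ['-e', '0']))
    const inProcess = timeMs(() => { /* already-loaded plugin code runs here */ })
    console.log(`new process: ~${spawned.toFixed(0)} ms, in-process: ~${inProcess.toFixed(3)} ms`)
    ```

    The spawned case pays interpreter startup on every call, which is the "few hundred ms" overhead the single-process design eliminates.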


    Occasionally we would be asked how other people could take advantage of the CLI codebase for their own use — not just to extend the Heroku CLI, but to write entirely new CLIs themselves. Unfortunately the Node/Go CLI was complicated for a few reasons: it had a complex Makefile to build both languages and the plugins, it was designed to work both standalone and inside v3, and there was quite a bit of "special" functionality that only worked with Heroku commands (a good example is the --app flag). We wanted a general solution that would allow other potential CLI writers to have custom functionality like this as well.

    CLI v6 is built on a platform we call cli-engine. It's not something that is quite ready for public use just yet, but the code is open sourced if you'd like to take a peek and see how it works. Expect to hear more about this soon when we launch examples and documentation around its use.

    New Plugin Interface

    Due to the changes needed to support much of the new functionality in CLI v6, we knew that we would have to significantly change the way plugins were written. Rather than look at this as a challenge, we considered it an opportunity to make improvements with new JavaScript syntax.

    The main change was moving from the old JavaScript object commands into ES2015 (ES6) class-based commands.

    // v5
    const cli = require('heroku-cli-util')
    const co = require('co')

    function * run (context, heroku) {
      let user = context.flags.user || 'world'
      cli.log(`hello ${user}`)
    }

    module.exports = {
      topic: 'hello',
      command: 'world',
      flags: [
        { name: 'user', hasValue: true, description: 'who to say hello to' }
      ],
      run: co.wrap(cli.command(run))
    }
    // v6
    import {Command, flags} from 'cli-engine-heroku'

    export default class HelloCommand extends Command {
      static topic = 'hello'
      static command = 'world'
      static flags = {
        user: flags.string({ description: 'who to say hello to' })
      }

      async run () {
        let user = this.flags.user || 'world'
        this.out.log(`hello ${user}`)
      }
    }


    async/await finally landed in Node 7 while we were building CLI v6. We had been anticipating this since we began the project by using co. Switching to async/await is largely a drop-in replacement:

    // co
    const co = require('co')
    let run = co.wrap(function * () {
      let apps = yield heroku.get('/apps')
    })

    // async/await
    async function run () {
      let apps = await heroku.get('/apps')
    }

    The only downside of moving away from co is that it offered some parallelization tricks using arrays of promises or objects of promises. We have to fall back to using Promise.all() now:

    // co
    let run = co.wrap(function * () {
      let apps = yield {
        a: heroku.get('/apps/appa'),
        b: heroku.get('/apps/appb')
      }
    })

    // async/await
    async function run () {
      let [appA, appB] = await Promise.all([
        heroku.get('/apps/appa'),
        heroku.get('/apps/appb')
      ])
    }

    It's not a major drawback, but it does make the code slightly more complicated. Not having to use a dependency, and the semantic benefits of async/await, far outweigh it.


    Flow

    The CLI is now written with Flow. This static type checker makes plugin development much easier: it enables text editors to provide powerful code autocomplete and syntax checking, verifying that the plugin interface is used correctly. It also makes plugins more resilient to change by providing interfaces checked with static analysis.


    While learning new tools is a challenge when writing code, we've found that with Flow the difficulty was mostly in writing the core of the CLI and not so much in writing plugins. Writing plugins involves using existing types and functions, so plugins often won't have any type definitions at all, whereas the core has many. This means we as the CLI engineers have done the hard work of setting up the static analysis, while plugin developers reap the benefits of having their code checked without having to learn much, if anything, about a new tool.


    Babel

    Class properties and Flow required us to use Babel to preprocess the code. Because the process for developing plugins requires you to "link" plugins into the CLI, we can check whether the code has changed before running the plugin. This means we can use Babel without requiring a "watch" process to build the code: it happens automatically, and there is no need to set up Babel or anything else. All you need to develop plugins is the CLI. (Note that Node must be installed for testing plugins, but it isn't needed to run a plugin in dev mode.)

    Improved Testing

    Testing is crucial to a large, heavily-used CLI. Making changes in the core of the CLI can have unexpected impact so providing good test coverage and making tests easy to write well is very important. We've seen what common patterns are useful in writing tests and iterated on them to make them concise and simple.

    As part of the new plugin interface, we've also done some work to make testing better. There were some gaps in our coverage before where we would have common issues. We worked hard to fill those gaps, ensuring our tests guaranteed commands were properly implementing the plugin interface while keeping the tests as simple as possible to write. Here is what they look like in comparison from v5 of the CLI to v6:

    // v5 mocha test: ./test/commands/hello.js
    const cli = require('heroku-cli-util')
    const expect = require('chai').expect
    const cmd = require('../../commands/hello') // path to the command under test (illustrative)

    describe('hello:world', function () {
      beforeEach(() => cli.mockConsole())

      it('says hello to a user', function () {
        return{flags: {user: 'jeff'}})
          .then(() => expect(cli.stdout).to.equal('hello jeff!\n'))
      })
    })
    // v6 jest test: ./src/commands/hello.test.js
    import Hello from './hello'

    describe('hello:world', () => {
      it('says hello to a user', async () => {
        let {stdout} = await Hello.mock('--user', 'jeff')
        expect(stdout).toEqual('hello jeff!\n')
      })
    })

    The syntax is almost identical, but we're using Jest in v6 and Mocha in v5. Jest comes preloaded with a mocking framework and an expectation framework, so there is much less to configure than with Mocha.

    The v6 tests also run the flag parser which is why '--user', 'jeff' has to be passed in. This avoids a common issue with writing v5 tests where you could write a test that works but not include the flag on the command definition. Also, if there is any quirk or change with the parser, we'll be able to catch it in the test since it's running the same parser.

    What's Next?

    With these changes in place, we've built a foundation for the CLI that's already been successful for several projects at Heroku. It empowers teams to quickly build new functionality that is well tested, easy to maintain, and has solid test coverage. In addition, with our CLI Style Guide and common UI components, we're able to deliver a consistent interface.

    In the near future, expect to see more work done to build more interactive interfaces that take advantage of what is possible in a CLI. We're also planning on helping others build similar CLIs both through releasing cli-engine as a general purpose CLI framework, but also through guidelines taken from our Style Guide that we feel all CLIs should strive to meet.

  • Heroku Postgres Update: Configuration, Credentials, and CI (Heroku)
    08 Aug 2017 15:13

    At the core of Heroku’s data services sits Postgres, and today, we are making it even easier to bend Heroku Postgres to the very unique needs of your application’s stack. With these new features, you can easily customize Postgres, making it more powerful and configurable, while retaining all the automation and management capabilities of Heroku Postgres you know and love. By changing Postgres settings, creating and working with database credentials, and providing tight integrations to Heroku and Heroku CI, you now have the ability to further tune your Postgres database to your team’s needs.

    More Flexible Postgres with PGSettings

    As we start peeling back the layers of Heroku Postgres, the ability to change default behavior is the first step in making Heroku Postgres more flexible. Using the Heroku CLI, any developer can use the PGSettings feature to change portions of the default Heroku Postgres configuration. One of the more acute areas where you may want different behavior is database logging. As Heroku Postgres databases grow sufficiently large (whether in the number of transactions, data volume, or connections), they can generate an amount of log output that hampers performance of the database.

    $ heroku pg:settings postgresql-large-1234 -a sushi
    === postgresql-large-1234
    log-lock-waits:             true
    log-min-duration-statement: 2000
    log-statement:              ddl

    With log-statement, for example, you can change the setting to none and the database won't log any information besides slow queries and errors. For databases on Heroku Postgres that experience a large amount of thrashing in the schema (for example, temp tables that come and go), this can save lots of space and improve performance of the system by avoiding so many log writes.

    $ heroku pg:settings:log-statement ddl -a sushi
    log-statement has been set to ddl for postgresql-large-1234.
    All data definition statements, such as CREATE, ALTER and DROP, will be logged in your application's logs.

    PGSettings is available for Heroku Postgres databases running on a current production plan, Standard, Premium, Private, or Shield, on Postgres version 9.6 or above.

    Manage Access Permissions with Heroku Postgres Credentials

    One of the benefits of Heroku Postgres is how the relationship between a database and its associated application is automatically maintained. When a database needs to be re-instantiated, or its credentials are changed, the new values are automatically reflected in the application.

    With the advent of sharable add-ons (and databases), a single database can now be used by multiple applications. With Heroku Postgres Credentials, each of those applications can now have a unique connection to each database, so the same automated credential management can now carry across multiple applications.

    In addition, as these credentials are scoped to a Heroku application, this feature provides an easy and powerful way to restrict database access to discrete groups of users in a way that's tightly integrated with the existing Heroku permissions model. For example, if your team has a data scientist or analyst, a best practice would be to create a read-only credential and give that individual access via that scoped credential, which they can retrieve from the environment information for a specific application. This way, the risk of unintended changes to the database is mitigated.

    Credentials can be created via the Heroku Data dashboard or the Heroku CLI, and are available for all non-legacy production plans, Standard, Premium, Private and Shield, on Postgres 9.6 and above.


    Tight Integration With Heroku CI and Pipelines

    You can now auto-provision Heroku Postgres instances for all your Heroku CI test runs and Heroku Pipelines Review apps with zero configuration. This gives Heroku Postgres users seamless, automated environment management for testing the code of any pull request, Git merge, or arbitrary build against a free, disposable Heroku Postgres hobby instance. You can optionally populate these instances with test data in the "Release Phase" script fully supported by Heroku Pipelines. Each ephemeral Heroku Postgres DB is designed to instantiate (and self-destruct) quickly and cleanly -- and you pay only for the dynos for the duration of the test run, or the existence of the Review app. As always, progress and results are reported to your development team in Slack, in GitHub, and in your Heroku Dashboard.

    Integrating all the bits!

    All of these features - PGSettings, Heroku Postgres Credentials and Integration with Heroku CI and Pipelines - are available today. We are exploring new ways to make Postgres even more versatile and powerful, and if you have any new settings you’d like to see exposed, or any ideas on integrations with Heroku, email us at

  • Announcing Heroku ChatOps for Slack (Heroku)
    25 Jul 2017 15:37

    Today we’re making our Slack integration generally available to all Heroku customers through the release of Heroku ChatOps.

    ChatOps is transforming the way dev teams work, replacing the asynchronous communication and context-switching of traditional operations processes with a shared conversational environment so teams can stay focused, communicate in real-time, gain visibility, and speed joint decision making.

    Having seen the benefits of Slack integration for managing our own apps, we wanted to make ChatOps easier to use and accessible to every dev team. Heroku ChatOps handles the complexity of user onboarding, authentication, and accountability between Slack & Heroku, and provides users with an intuitive slash command interface and curated Slack notifications to improve your team's efficiency and transparency. Heroku ChatOps is easy to set up and works out of the box with a simple one-click installation.

    Our initial release supports the integration of Heroku’s popular Pipelines continuous delivery workflow with Slack. This means you can deploy and promote pipeline apps, and keep informed of your team’s releases and integration test status without ever leaving Slack.

    Getting Started

    We made ChatOps simple to install using a one-click command from the Dev Center page. Team members can then seamlessly authenticate from Slack with their GitHub and Heroku accounts via OAuth.


    Heroku Flow and Team Ready

    We designed ChatOps with teams and Heroku Flow -- a visual, easy to use workflow for continuous delivery -- in mind. All of your team’s Heroku Pipelines are visible and available for managing from within your team’s Slack channel. ChatOps allows your team to have shared visibility and collaborate effectively on continuous delivery workflows from within Slack.

    Proactive Notifications

    ChatOps unifies monitoring of your continuous delivery workflow by creating proactive notifications in Slack for Heroku Pipelines associated events like pull request openings, Heroku CI activity, or Dashboard-initiated deployments and promotions. No more looking through multiple emails, activity logs, or forwarding notifications -- everyone sees the same thing immediately in the same place.

    Slash Commands for Collaborative Deployment

    ChatOps brings the deployment processes that are happening behind the scenes on a single engineer’s laptop to the forefront, using a set of slash commands to help manage the delivery of your Heroku applications from inside of Slack. You can deploy to apps in any stage of your team’s pipeline or promote to production directly from within Slack.

    Output from deployment and pipeline promotion commands is organized into threads to keep things tidy. ChatOps will alert you if a required deployment check fails, with a user-friendly error message. You also have the option of ignoring the check failure and forcing a deploy. Pipeline configuration details and release history are also readily available.

    Learn More

    Installation instructions and the full list of commands are available in our Dev Center documentation. More details on how we built our Slack integration are available on the Slack Platform Blog. If you have feedback, you can reach us. We plan to support other Heroku integrations in the future, and we welcome your suggestions.

  • Using Heroku's Expensive Query Dashboard to Speed up your App (Heroku)
    11 Jul 2017 15:39

    I recently demonstrated how you can use Rack Mini Profiler to find and fix slow queries. It’s a valuable tool for well-trafficked pages, but sometimes the slowdown is happening on a page you don't visit often, or in a worker task that isn't visible via Rack Mini Profiler. How can you find and fix those slow queries?

    Heroku has a feature called expensive queries that can help you out. It shows historical performance data about the queries running on your database: most time consuming, most frequently invoked, slowest execution time, and slowest I/O.


    Recently, I used this feature to identify and address some slow queries for a site I run on Heroku named CodeTriage (the best way to get started contributing to open source). Looking at the expensive queries data for CodeTriage, I saw this:

    Code Triage Project Expensive Query Screenshot

    On the right is the query; on the left are two graphs: one showing the number of times the query was called, and beneath it another showing the average time it took to execute the query. You can see from the bottom graph that the average execution time can be up to 8 seconds. Yikes! Ideally, I want my response time averages to be around 50 ms and my 95th percentile to be sub-second, so waiting 8 seconds for a single query to finish isn't good.

    To find this on your own apps you can follow directions on the expensive queries documentation. The documentation will direct you to your database list page where you can select the database you’d like to optimize. From there, scroll down and find the expensive queries near the bottom.

    Once you've chosen a slow query, you’ll need to determine why it's slow. To accomplish this use EXPLAIN ANALYZE:

    issuetriage::DATABASE-> SELECT "issues".*
    issuetriage::DATABASE-> FROM "issues"
    issuetriage::DATABASE-> WHERE "issues"."repo_id" = 2151
    issuetriage::DATABASE->         AND "issues"."state" = 'open'
    issuetriage::DATABASE-> ORDER BY  created_at DESC LIMIT 20 OFFSET 0;
                                                                           QUERY PLAN
    Limit  (cost=27359.98..27359.99 rows=20 width=1232) (actual time=82.800..82.802 rows=20 loops=1)
       ->  Sort  (cost=27359.98..27362.20 rows=4437 width=1232) (actual time=82.800..82.801 rows=20 loops=1)
             Sort Key: created_at
             Sort Method: top-N heapsort  Memory: 31kB
             ->  Bitmap Heap Scan on issues  (cost=3319.34..27336.37 rows=4437 width=1232) (actual time=27.725..81.220 rows=5067 loops=1)
                   Recheck Cond: (repo_id = 2151)
                   Filter: ((state)::text = 'open'::text)
                   Rows Removed by Filter: 13817
                   ->  Bitmap Index Scan on index_issues_on_repo_id  (cost=0.00..3319.12 rows=20674 width=0) (actual time=24.293..24.293 rows=21945 loops=1)
                         Index Cond: (repo_id = 2151)
    Total runtime: 82.885 ms

    In this case, I'm querying the Kubernetes repo because it currently has the highest issue count, so querying that page will likely give me the worst performance.

    We see the total time spent was 82 ms, which isn't bad for one of the "slowest" queries, but we've seen that some runs can be far worse. Most single queries should aim for a query time of around 1 ms.

    We also see that before the query can return results, the data has to be sorted; this is because we are using an ORDER BY together with a LIMIT/OFFSET clause. Sorting is a very expensive operation: you can see from the "actual time" figures that it can take between 27.725 ms and 81.220 ms just to gather and sort the data, which is pretty tough. If we can get rid of this sort, we can drastically improve our query.

    One way to do this is... you guessed it, add an index. Unlike last week, though, the issues table is HUGE. While the table we indexed last week only had around 2K entries, each of those entries can have a virtually unbounded number of issues. In the case of Kubernetes there are 5K+ issues, and that's only the state=open ones. The closed issue count is much larger than that, and it will only grow over time. We want to be mindful of taking up too much database space, so instead of indexing ALL the data, we can apply a partial index. I'm almost never querying for state=closed when it comes to issues, so we can ignore those rows while building our index. Here's the migration I used to add a partial index:

    class AddCreatedAtIndexToIssues < ActiveRecord::Migration[5.1]
      def change
        add_index :issues, :created_at, where: "state = 'open'"
      end
    end

    What's the result of adding this index? Let's look at that same query we analyzed before:

    issuetriage::DATABASE=> EXPLAIN ANALYZE SELECT "issues".*
    issuetriage::DATABASE-> FROM "issues"
    issuetriage::DATABASE-> WHERE "issues"."repo_id" = 2151
    issuetriage::DATABASE->         AND "issues"."state" = 'open'
    issuetriage::DATABASE-> ORDER BY  created_at DESC LIMIT 20 OFFSET 0;
                                                                             QUERY PLAN
    Limit  (cost=0.08..316.09 rows=20 width=1232) (actual time=0.169..0.242 rows=20 loops=1)
       ->  Index Scan Backward using index_issues_on_created_at on issues  (cost=0.08..70152.26 rows=4440 width=1232) (actual time=0.167..0.239 rows=20 loops=1)
             Filter: (repo_id = 2151)
             Rows Removed by Filter: 217
    Total runtime: 0.273 ms

    Wow, from 80+ ms to less than half a millisecond. That's some improvement. The index keeps our data already sorted, so we don't have to re-sort it on every query. All elements in the index are guaranteed to be state=open, so the database doesn't have to do any extra work there: it can simply scan the index backwards, skipping rows whose repo_id doesn't match our target.

    For this case it is EXTREMELY fast, but can you imagine a case where it isn't so fast?

    Perhaps you noticed that we still have to iterate over issues until we're able to find ones matching a given Repo ID. I'm guessing that since this repo has the most issues, it's able to easily find 20 issues with state=open. What if we pick a different repo?

    I looked up the oldest open issue and found it in Journey. Journey has an ID of 10 in the database. If we do the same query and look at Journey:

    issuetriage::DATABASE=> EXPLAIN ANALYZE SELECT "issues".*
    issuetriage::DATABASE-> FROM "issues"
    issuetriage::DATABASE-> WHERE "issues"."repo_id" = 10
    issuetriage::DATABASE->         AND "issues"."state" = 'open'
    issuetriage::DATABASE-> ORDER BY  created_at DESC LIMIT 20 OFFSET 0;
                                                                         QUERY PLAN
     Limit  (cost=757.18..757.19 rows=20 width=1232) (actual time=21.109..21.110 rows=6 loops=1)
       ->  Sort  (cost=757.18..757.20 rows=50 width=1232) (actual time=21.108..21.109 rows=6 loops=1)
             Sort Key: created_at
             Sort Method: quicksort  Memory: 26kB
             ->  Index Scan using index_issues_on_repo_id on issues  (cost=0.11..756.91 rows=50 width=1232) (actual time=11.221..21.088 rows=6 loops=1)
                   Index Cond: (repo_id = 10)
                   Filter: ((state)::text = 'open'::text)
                   Rows Removed by Filter: 14
     Total runtime: 21.140 ms

    Yikes. Previously we were at only 0.27 ms; now we're back up to 21 ms. This might not be the "8 second" query we saw before, but it's definitely slower than the first query we profiled.

    Even though we've got an index on created_at, Postgres has decided not to use it. Instead it reverts to sorting, using the index on repo_id to pull the data. Once it has the issues, it iterates over each one, removing those whose state is not open.

    In this case there are only 20 total issues for Journey, so grabbing them all and sorting manually was deemed faster. Does this mean our index is worthless? Well, considering this repo only has 1 subscriber, it's not the case we need to optimize for. Also, if lots of people visit that page (maybe because of this article), Postgres will speed up the query by using its cache. The second time I ran the exact same explain query, it was much faster:

     Total runtime: 0.092 ms

    Postgres already had everything it needed in the cache. Does this mean we're totally out of the woods then? Going back to my expensive queries page after a few days, I saw that my 8 second worst case is gone, but I still have a 2 second query every now and then.

    Expensive Queries Screenshot 2

    This is still a 75% improvement in worst-case performance, so the index is still useful. One really useful feature of Postgres is the ability to combine multiple indexes. In this case, though, even though we have an index on created_at and an index on repo_id, Postgres does not think it's faster to combine the two and use the result. To fix this, we can add a single index that covers both created_at and repo_id, which maybe I'll explore in the future.

    Before we go, I want to circle back to how we found our slow query test case. I had to know a bit about the data and make some assumptions about the worst case scenarios. I had to guess that Kubernetes was our worst offender, which ended up not being true. Is there a better way than guess and check?

    It turns out that Heroku will output slow queries into your app's logs. Unlike the expensive queries dashboard, these log lines also contain the parameters used in the query, not just the query itself. If you have a logging add-on such as Papertrail, you can search your logs for duration and get a result like this:

    Jun 26 06:36:54 issuetriage app/postgres.29339:  [DATABASE] [39-1] LOG:  duration: 3040.545 ms  execute <unnamed>: SELECT COUNT(*) FROM "issues" WHERE "issues"."repo_id" = $1 AND "issues"."state" = $2 
    Jun 26 06:36:54 issuetriage app/postgres.29339:  [DATABASE] [39-2] DETAIL:  parameters: $1 = '696', $2 = 'open' 
    Jun 26 08:26:25 issuetriage app/postgres.29339:  [DATABASE] [40-1] LOG:  duration: 9087.165 ms  execute <unnamed>: SELECT COUNT(*) FROM "issues" WHERE "issues"."repo_id" = $1 AND "issues"."state" = $2 
    Jun 26 08:26:25 issuetriage app/postgres.29339:  [DATABASE] [40-2] DETAIL:  parameters: $1 = '1245', $2 = 'open' 
    Jun 26 08:49:40 issuetriage app/postgres.29339:  [DATABASE] [41-1] LOG:  duration: 2406.615 ms  execute <unnamed>: SELECT  "issues".* FROM "issues" WHERE "issues"."repo_id" = $1 AND "issues"."state" = $2 ORDER BY created_at DESC LIMIT $3 OFFSET $4 
    Jun 26 08:49:40 issuetriage app/postgres.29339:  [DATABASE] [41-2] DETAIL:  parameters: $1 = '1348', $2 = 'open', $3 = '20', $4 = '760' 

    In this case, we can see that our 2.4 second query (the last query in the logs above) is using a repo id of 1348 and an offset of 760, which brings up another important point: as the offset goes up, the cost of scanning our index also goes up. So it turns out we had a worse case than both my initial guess (Kubernetes) and my second guess (Journey). It's likely that this repo has lots of old issues and this query isn't made often, so the data is not in the cache. By using the logs we can find the exact worst case without all the guessing.

    Before you start writing that comment, yes, I know that offset pagination is broken and that there are other ways to paginate. I may start to look at alternative pagination options, or even get rid of some of the pagination on the site altogether.
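One of those alternatives is keyset (cursor-based) pagination, which replaces OFFSET with a WHERE condition on the last value seen, so the database can seek straight into the index no matter how deep the page is. A sketch of the idea against the same table (the cursor timestamp here is a made-up example value, not from the post):

```sql
SELECT "issues".*
FROM "issues"
WHERE "issues"."repo_id" = 1348
  AND "issues"."state" = 'open'
  AND "issues"."created_at" < '2017-06-01 00:00:00'  -- created_at of the last row on the previous page
ORDER BY created_at DESC
LIMIT 20;
```

Because the cursor condition matches the index's sort order, the cost stays roughly constant per page instead of growing with the offset.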

    I did go back and add a single index covering both the repo_id and created_at columns. With the addition of that index, my "worst case" of 2.4 seconds is now down to 14 ms:

    issuetriage::DATABASE=> EXPLAIN ANALYZE SELECT  "issues".*
    issuetriage::DATABASE-> FROM "issues"
    issuetriage::DATABASE-> WHERE "issues"."repo_id" = 1348
    issuetriage::DATABASE-> AND "issues"."state" = 'open'
    issuetriage::DATABASE-> ORDER BY created_at DESC
    issuetriage::DATABASE-> LIMIT 20 OFFSET 760;
                                                                                    QUERY PLAN
     Limit  (cost=1380.73..1417.06 rows=20 width=1232) (actual time=14.515..14.614 rows=20 loops=1)
       ->  Index Scan Backward using index_issues_on_repo_id_and_created_at on issues  (cost=0.08..2329.02 rows=1282 width=1232) (actual time=0.061..14.564 rows=780 loops=1)
             Index Cond: (repo_id = 1348)
     Total runtime: 14.659 ms
    (4 rows)

    Here you can see that we're able to use our new index directly and find only the issues that are open and belong to a specific repo id.
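For reference, an index like the one behind this plan can be declared directly in SQL. This is a sketch: the partial `WHERE state = 'open'` clause is an assumption based on the earlier partial index (and is consistent with the plan above showing no separate state filter):

```sql
-- Composite partial index: rows are grouped by repo_id, pre-sorted by
-- created_at, and only open issues are stored
CREATE INDEX index_issues_on_repo_id_and_created_at
  ON issues (repo_id, created_at)
  WHERE state = 'open';
```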

    What did I learn from this experiment?

    • You can find slow queries using Heroku's expensive queries feature.
    • The exact arguments matter a lot when profiling queries. Don't assume you know the most expensive thing your database is doing; use metrics.
    • You can find the exact parameters that go with those expensive queries by grepping your logs for their duration entries.
    • Indexes help a ton, but you have to understand the different ways your application will use them. It's not enough to profile with one query before and after; you need to profile several queries with different performance characteristics. In my case, not only did I add an index, I went back to the expensive queries page, which let me know that my queries were still taking a long time (~2 seconds).
    • Performance tuning isn't about magic fixes; it's about finding a toolchain you understand and iterating on a process until you get the results you want.

    Richard Schneeman is an Engineer for Heroku who also writes posts on his own blog. If you liked this post, you can subscribe to his mailing list to get more like it for free.

  • On the Rise of Kotlin (Heroku)
    20 Jun 2017 15:27

    It’s rare when a highly structured language with fairly strict syntax sparks emotions of joy and delight. But Kotlin, which is statically typed and compiled like other less friendly languages, delivers a developer experience that thousands of mobile and web programmers are falling in love with.

    The designers of Kotlin, who have years of experience with developer tooling (IntelliJ and other IDEs), created a language with very specific developer-oriented requirements. They wanted a modern syntax, fast compile times, and advanced concurrency constructs while taking advantage of the robust performance and reliability of the JVM. The result, Kotlin 1.0, was released in February 2016 and its trajectory since then has been remarkable. Google recently announced official support for Kotlin on Android, and many server-side technologies have introduced Kotlin as a feature.

    The Spring community announced support for Kotlin in Spring Framework 5.0 last month and the Vert.x web server has worked with Kotlin for over a year. Kotlin integrates with most existing web applications and frameworks out-of-the-box because it's fully interoperable with Java, making it easy to use your favorite libraries and tools.

    But ultimately, Kotlin is winning developers over because it’s a great language. Let’s take a look at why it makes us so happy.

    A Quick Look at Kotlin

    The first thing you’ll notice about Kotlin is how streamlined it is compared to Java. Its syntax borrows from languages like Groovy and Scala, which reduce boilerplate by making semicolons optional as statement terminators, simplifying for loops, and adding support for string templating among other things. A simple example in Kotlin is adding two numbers inside of a string like this:

    val sum: String = "sum of $a and $b is ${a + b}"

    The val keyword is a feature borrowed from Scala. It defines a read-only variable, which in this case is explicitly typed as a String. But Kotlin can also infer that type. For example, you could write:

    val x = 5

    In this case, the type Int is inferred by the compiler. That’s not to say the type is dynamic though. Kotlin is statically typed, but it uses type inference to reduce boilerplate.

    Like many of the JVM languages it borrows from, Kotlin makes it easier to use functions and lambdas. For example, you can filter a list by passing it an anonymous function as a predicate:

    val positives = list.filter { it > 0 }

    The it variable in the function body references the first argument to the function by convention. This is borrowed from Groovy, and eliminates the boilerplate of defining parameters.

    You can also define named functions with the fun keyword. The following example creates a function with default arguments, another great Kotlin feature that cleans up your code:

    fun printName(name: String = "John Doe") {
      println(name)
    }

    But Kotlin does more than borrow from other languages. It introduces new capabilities that other JVM languages lack. Most notable are null safety and coroutines.

    Null safety means that a Kotlin variable cannot be set to null unless it is explicitly defined as a nullable variable. For example, the following code would generate a compiler error:

    val message: String = null

    But if you add a ? to the type, it becomes nullable. Thus, the following code is valid to the compiler:

    val message: String? = null

    Null safety is a small but powerful feature that prevents numerous runtime errors in your applications.
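Nullable types pair naturally with Kotlin's safe-call (?.) and Elvis (?:) operators; both are standard Kotlin, shown here as a small illustrative sketch:

```kotlin
val message: String? = null

// Safe call: evaluates to null instead of throwing a NullPointerException
val length: Int? = message?.length

// Elvis operator: fall back to a default when the value is null
val size: Int = message?.length ?: 0
```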

    Coroutines, on the other hand, are more than just syntactic sugar. Coroutines are chunks of code that can be suspended to prevent blocking a thread of execution, which greatly simplifies asynchronous programming.

    For example, the following program starts 100,000 coroutines using the launch function. The body of the coroutine can be paused at a suspension point so the main thread of execution can perform some other work while it waits:

    fun main(args: Array<String>) = runBlocking<Unit> {
      var number = 0
      val random = Random()
      val jobs = List(100_000) {
        launch(CommonPool) {
          delay(10)
          number += random.nextInt(100)
        }
      }
      jobs.forEach { it.join() }
      println("The answer is: $number")
    }

    The suspension point is the delay call. Otherwise, each coroutine simply adds a random number to the total, which is printed at the end.

    Coroutines are still an experimental feature in Kotlin 1.1, but early adopters can use them in their applications today.

    Despite all of these great examples, the most important feature of Kotlin is its ability to integrate seamlessly with Java. You can mix Kotlin code into an application that’s already based on Java, and you can consume Java APIs from Kotlin with ease, which smooths the transition and provides a solid foundation.

    Kotlin Sits on the Shoulders of Giants

    Behind every successful technology is a strong ecosystem. Without the right tools and community, a new programming language will never achieve the uptake required to become a success. That’s why it’s so important that Kotlin is built into the Java ecosystem rather than outside of it.

    Kotlin works seamlessly with Maven and Gradle, which are two of the most reliable and mature build tools in the industry. Unlike other programming languages that attempted to separate from the JVM ecosystem by reinventing dependency management, Kotlin leverages the virtues of Java for its tooling. There are attempts to create Kotlin-based build tools, which would be a great addition to the Kotlin ecosystem, but they aren't a prerequisite for being productive with the language.

    Kotlin also works seamlessly with popular JVM web frameworks like Spring and Vert.x. You can even create a new Kotlin-based Spring Boot application from the Spring Initializer web app. There has been a huge increase in adoption of Kotlin for apps generated this way.

    Kotlin has great IDE support too, thanks to its creators. The best way to learn Kotlin is by pasting some Java code into IntelliJ and allowing the IDE to convert it to Kotlin code for you. All of these pieces come together to make a recipe for success. Kotlin is poised to attract both new and old Java developers because it's built on solid ground.

    If you want to see how well Kotlin fits into existing Java tooling, try deploying a sample Kotlin application on Heroku using our Getting Started with Kotlin guide. If you're familiar with Heroku, you'll notice that it looks a lot like deploying any other Java-based application on our platform, which helps make the learning curve for Kotlin relatively flat. But why should you learn Kotlin?

    Why Kotlin?

    Heroku already supports five JVM languages that cover nearly every programming language paradigm in existence. Do we need another JVM language? Yes. We need Kotlin as an alternative to Java just as we needed Java as an alternative to C twenty years ago. Our existing JVM languages are great, but none of them have demonstrated the potential to become the de facto language of choice for a large percentage of JVM developers.

    Kotlin has learned from the JVM languages that preceded it and borrowed the best parts from those ecosystems. The result is a well-rounded, powerful, and production-ready platform for your apps.

  • Habits of a Happy Node Hacker 2017 (Heroku)
    14 Jun 2017 15:50

    It’s been a little over a year since our last Happy Node Hackers post, and even in such a short time much has changed and some powerful new tools have been released. The Node.js ecosystem continues to mature and new best practices have emerged.

    Here are 8 habits for happy Node hackers updated for 2017. They're specifically for app developers, rather than module authors, since those groups have different goals and constraints:

    1. Lock Down Your Dependency Tree

    In modern Node applications, your code is often only the tip of an iceberg. Even a small application could have thousands of lines of JavaScript hidden in node_modules. If your application specifies exact dependencies in package.json, the libraries you depend on probably don’t. Over time, you'll get slightly different code for each install, leading to unpredictability and potentially introducing bugs.

    In the past year Facebook surprised the Node world when it announced Yarn, a new package manager that let you use npm's vast registry of nearly half a million modules and featured a lockfile that saves the exact version of every module in your dependency tree. This means that you can be confident that the exact same code will be downloaded every time you deploy your application.

    Not to be outdone, npm released a new version with a lockfile of its own. Oh, and it's a lot faster now too. This means that whichever modern package manager you choose, you'll see a big improvement in install times and fewer errors in production.

    To get started with Yarn, install it and run yarn in your application’s directory. This will install your dependencies and generate a yarn.lock file which tells Heroku to use Yarn when building your application.

    To use npm 5, update locally by running npm install -g npm@latest and reinstall your application's dependencies by running rm -rf node_modules && npm install. The generated package-lock.json will let Heroku know to use npm 5 to install your modules.

    2. Hook Things Up

    Lifecycle scripts make great hooks for automation. If you need to run something before building your app, you can use the preinstall script. Need to build assets with grunt, gulp, browserify, or webpack? Do it in the postinstall script.

    In package.json:

    "scripts": {
      "postinstall": "grunt build",
      "start": "node app.js"

    You can also use environment variables to control these scripts:

    "postinstall": "if $BUILD_ASSETS; then npm run build-assets; fi",
    "build-assets": "grunt build"

    If your scripts start getting out of control, move them to files:

    "postinstall": "scripts/"

    3. Modernize Your JavaScript

    With the release of Node 8, the days of maintaining a complicated build system to write our application in ES2015, also known as ES6, are mostly behind us. Node is now 99% feature complete with the ES2015 spec, which means you can use new features such as template literals or destructuring assignment with no ceremony or build process!

    const combinations = [
      { number: "8.0.0", platform: "linux-x64" },
      { number: "8.0.0", platform: "darwin-x64" },
      { number: "7.9.0", platform: "linux-x64" },
      { number: "7.9.0", platform: "darwin-x64" }
    ];

    for (let { number, platform } of combinations) {
      console.log(`${number} on ${platform}`);
    }

    There are a ton of additions, and overall they work together to significantly increase the legibility of JavaScript and make your code more expressive.

    4. Keep Your Promises

    Beyond ES2015, Node 8 supports the long-awaited async and await keywords without opting in to experimental features. This feature builds on top of Promises allowing you to write asynchronous code that looks like synchronous code and has the same error handling semantics, making it easier to write, easier to understand, and safer.

    You can re-write nested callback code that looks like this:

    function getPhotos(fn) {
      getUsers((err, users) => {
        if (err) return fn(err);
        getAlbums(users, (err, albums) => {
          if (err) return fn(err);
          getPhotosForAlbums(albums, (err, photos) => {
            if (err) return fn(err);
            fn(null, photos);
          });
        });
      });
    }
    into code that reads top-down instead of inside-out:

    async function getPhotos() {
      const users = await getUsers();
      const albums = await getAlbums(users);
      return getPhotosForAlbums(albums);
    }
    You can call await on any call that returns a Promise. If you have functions that still expect callbacks, Node 8 ships with util.promisify which can automatically turn a function written in the callback style into a function that can be used with await.

    5. Automate Your Code Formatting with Prettier

    We’ve all collectively spent too much time formatting code, adding a space here, aligning a comment there, and we all do it slightly differently than our teammates two desks down. This leads to endless debates about where the semicolon goes, or whether we should use semicolons at all. Prettier is an open source tool that promises to finally eliminate those pointless arguments for good. You can write your code in any style you like, and with one command it’s all formatted consistently.


    That may sound like a small thing but freeing yourself from arranging whitespace quickly feels liberating. Prettier was only released a few months ago, but it's already been adopted by Babel, React, Khan Academy, Bloomberg, and more!

    If you hate writing semicolons, let Prettier add them for you, or your whole team can banish them forever with the --no-semi option. Prettier supports ES2015 and Flow syntax, and the recent 1.4.0 release added support for CSS and TypeScript as well.

    There are integrations with all major text editors, but we recommend setting it up as a pre-commit hook or with a lifecycle script in package.json.

    "scripts": {
      "prettify": "prettier --write 'src/**/*.js'"

    6. Test Continuously

    Pushing out a new feature and finding out that you've broken the production application is a terrible feeling. You can avoid this mistake if you’re diligent about writing tests for the code you write, but it can take a lot of time to write a good test suite. Besides, that feature needs to be shipped yesterday, and this is only a first version. Why write tests that will only have to be re-written next week?

    Writing unit tests in a framework like Mocha or Jest is one of the best ways of making sure that your JavaScript code is robust and well-designed. However, there is a lot of code that may not justify the time investment of an extensive test suite. The testing library Jest has a feature called Snapshot Testing that can help you get insight and visibility into code that would otherwise go untested. Instead of deciding ahead of time what the expected output of a function call should be and writing a test around it, Jest will save the actual output into a local file on the first run, then compare it to the output on the next run and alert you if it's changed.


    While this won't tell you if your code is working exactly as you'd planned when you wrote it, this does allow you to observe what changes you're actually introducing into your application as you move quickly and develop new features. When the output changes you can quickly update the snapshots with a command, and they will be checked into your git history along with your code.

    it("test /endpoint", async () => {
      const res = await request(``);
      const body = await res.json();
      const { status, headers } = res;
      expect({ status, body, headers }).toMatchSnapshot();

    Example Repo

    Once you've tested your code, setting up a good CI workflow is one way of making sure that it stays tested. To that end, we launched Heroku CI. It’s built into the Heroku continuous delivery workflow, and you'll never wait for a queue. Check it out!

    Don't need the fancy features and just want a super simple test runner? Check out tape for your minimal testing needs.

    7. Wear Your Helmet

    For web application security, much of the important yet easy configuration needed to lock down an app comes down to returning the right HTTP headers.

    You won't get most of these headers with a default Express application, so if you want to put an application in production with Express, you can go pretty far by using Helmet. Helmet is an Express middleware module for securing your app mainly via HTTP headers.

    Helmet helps you prevent cross-site scripting attacks, protect against click-jacking, and more! It takes just a few lines to add basic security to an existing express application:

    const express = require('express');
    const helmet = require('helmet');

    const app = express();
    app.use(helmet());

    Read more about Helmet and other Express security best practices

    8. HTTPS all the things

    By using private connections by default, we make it the norm, and everyone is safer. As web engineers, there is no reason we shouldn’t default all traffic in our applications to using HTTPS.

    In an express application, there are several things you need to do to make sure you're serving your site over https. First, make sure the Strict-Transport-Security header (often abbreviated as HSTS) is set on the response. This instructs the browser to always send requests over https. If you’re using Helmet, then this is already done for you!

    Then make sure that you're redirecting any http requests that do make it to the server to the same url over https. The express-enforce-ssl middleware provides an easy way to do this.

    const express = require('express');
    const expressEnforcesSSL = require('express-enforces-ssl');

    const app = express();
    app.enable('trust proxy');
    app.use(expressEnforcesSSL());

    Additionally you'll need a TLS certificate from a Certificate Authority. But if you are deploying your application to Heroku and using any hobby or professional dyno, you will automatically get TLS certificates set up through Let’s Encrypt for your custom domains by our Automated Certificate Management – and for applications without a custom domain, we provide a wildcard certificate for *

    What are your habits?

    I try to follow these habits in all of my projects. Whether you’re new to node or a server-side JS veteran, I’m sure you’ve developed tricks of your own. We’d love to hear them! Share your habits by tweeting with the #node_habits hashtag.

    Happy hacking!

  • Announcing Release Phase: Automatically Run Tasks Before a New Release is Deployed (Heroku)
    08 Jun 2017 15:37

    You’re using a continuous delivery pipeline because it takes the manual steps out of code deployment. But when a release includes updates to a database schema, the deployment requires manual intervention and team coordination. Typically, someone on the team will log into the database and run the migration, then quickly deploy the new code to production. It's a process rife with deployment risk.

    Now with Release Phase, generally available today, you can define tasks you need to run before a release is deployed to production. Simply push your code and Release Phase will automatically run your database schema migration, upload static assets to a CDN, or any other task your app needs to be ready for production. If a Release Phase task fails, the new release is not deployed, leaving the production release unaffected.

    To get started, view the release phase documentation.


    A Release Phase Example

    Let’s say you have a Node.js app, using Sequelize as your ORM, and want to run a database migration on your next release. Simply define a release command in your Procfile:

    release: node_modules/.bin/sequelize db:migrate
    web: node ./bin/www

    When you run git push heroku master, after the build is successful, Release Phase begins the migration via a one-off dyno. If the migration is successful, the app code is deployed to production. If the migration fails, your release is not deployed and you can check your Release Phase logs to debug.

    $ git push heroku master
    Running release command….
    --- Migrating Db ---
    Sequelize [Node: 7.9.0, CLI: 2.7.9, ORM: 3.30.4]
    Loaded configuration file "config/config.json".
    Using environment "production".
    == 20170413204504-create-post: migrating ======
    == 20170413204504-create-post: migrated (0.054s)
    V23 successfully deployed 

    Check out the video to watch it in action:

    Heroku Flow + Release Phase

    Heroku Flow provides you with a professional continuous delivery pipeline with dev, staging, and production environments. When you promote a release from staging to production, Release Phase will automatically run your tasks in the production environment.

    Screen Shot 2017-05-09 at 10

    With Heroku Flow you always know where a particular feature is on the path to production. Now -- with Release Phase -- the path to production has even fewer manual steps.

  • Introducing Heroku Shield: Continuous Delivery for High Compliance Apps (Heroku)
    06 Jun 2017 12:45

    Today we are happy to announce Heroku Shield, a new addition to our Heroku Enterprise line of products. Heroku Shield introduces new capabilities to Dynos, Postgres databases and Private Spaces that make Heroku suitable for high compliance environments such as healthcare apps regulated by the Health Insurance Portability and Accountability Act (HIPAA). With Heroku Shield, the power and productivity of Heroku is now easily available to a whole new class of strictly regulated apps.

    At the core of Heroku’s products is the idea that developers can turn great ideas into successful customer experiences at a surprising pace when all unnecessary and irrelevant elements of application infrastructure are systematically abstracted away. The design of Heroku Shield started with the question: what if regulatory and compliance complexity could be transformed into a simple developer experience, just as has been done for infrastructure complexity? The outcome is a simple, elegant user experience that abstracts away compliance complexity while freeing development teams to use the tools and services they love in a new class of app.

    Heroku Shield is generally available to Heroku Enterprise customers. For more information about Heroku Enterprise, please contact us here.

    How it Works


    Shield Private Spaces

    To use Heroku Shield, start by creating a new Private Space and switch on the Shield option. The first thing you notice is that logging is now configured at the space level. With Private Space Logging, logs from all apps and control systems are automatically forwarded to the logging destination configured for the space. This greatly simplifies compliance auditing while still leaving the developers in full control of app configuration and deployment.

    Shield Private Spaces also add a critical compliance feature to the heroku run command used by developers to access production apps for administrative and diagnostic tasks. In a Shield Private Space, all keystrokes typed in an interactive heroku run session are logged automatically. This meets a critical compliance requirement to audit all production access, without restricting developers from performing diagnostics and time-sensitive remediation tasks directly in production environments.

    Shield Private Dynos and Postgres

    In a Shield Private Space you can create special Shield flavors of Dynos and Postgres databases. The Shield Private Dyno includes an encrypted ephemeral file system and prevents SSL termination from using TLS 1.0, which is considered vulnerable. Shield Private Postgres further guarantees that data is always encrypted in transit and at rest. Heroku also captures a high volume of security monitoring events for Shield dynos and databases, which helps meet regulatory requirements without imposing any extra burden on developers.
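    Heroku enforces that TLS floor at the platform's SSL termination layer, so apps get it without any configuration. As a rough client-side analogue (not Heroku's implementation), Python's ssl module can impose the same minimum version on connections an app makes:

```python
import ssl

# Build a client context that refuses TLS 1.0/1.1 handshakes,
# mirroring the minimum-version floor Shield enforces at SSL termination.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Any connection negotiated through this context now uses TLS 1.2 or newer;
# a peer that only speaks TLS 1.0 fails the handshake instead of connecting.
```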

    App Innovation for Healthcare and Beyond

    With Heroku Shield, you can now build healthcare apps on Heroku that are capable of handling protected health information (PHI) in compliance with the United States HIPAA framework. The healthcare industry is living proof of how challenging it is to modernize application delivery while meeting strict compliance requirements. All you have to do is compare the user experience of most healthcare apps with what you have come to expect from apps in less regulated industries like e-commerce, productivity and social networks.

    It's simply too hard to evolve and modernize healthcare apps today because they are delivered using outdated, rigid platforms and practices. At Heroku, we are doing our small part to change this by providing development teams a HIPAA-ready platform with the industry's best Continuous Delivery Experience.

    Of course, this is just a step on our trust journey - the work of providing more security and compliance capabilities is never complete. We are already working on new capabilities and certifications for Heroku Shield, and as always look to our customers and the developer community for input on how to direct and prioritize those efforts.


    Combining developer creativity with the opportunities for innovation in high compliance industries is a potent mix. Heroku has had the privilege of seeing what becomes possible when obstacles are removed from developers' paths, and with Shield we hope to see that promise amplified yet again. For more information on Shield, see the Dev Center article here, or contact Heroku.

  • Announcing DNS Service Discovery for Heroku Private Spaces: Microservices Communication, Made Easy (Heroku)
    31 May 2017 15:38

    Today, we are excited to announce DNS Service Discovery for Heroku Private Spaces, an easy way to find and coordinate services for microservice-style deployments.

    As applications grow in sophistication and scale, developers often organize their applications into small, purpose-built “microservices”. These microservices act in unison to achieve what would otherwise be handled by a single, larger monolithic application, which simplifies each codebase and improves the system's overall reliability.

    DNS Service Discovery is a valuable component of a true microservices architecture. It is a simple, yet effective way to facilitate microservice-style application architecture on Private Spaces using standard DNS naming conventions. As a result, your applications can now know in advance how they should reach the other process types and services needed to do their job.

    How It Works


    DNS Service Discovery allows you to connect services together by providing a naming scheme for finding individual dynos within your Private Space. Every process type of every application in the Space is configured to respond to a standard DNS name of the format <process-type>.<application-name>.app.localspace.


    $ nslookup <process-type>.<application-name>.app.localspace
    (the answer contains one “0 IN A <dyno-IP>” record per running dyno of that process type)

    This is enabled by default on all newly created applications in Private Spaces. For existing Private Spaces applications, you need to run:

    $ heroku features:enable spaces-dns-discovery --app <app name>
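    Because discovery is purely name-based, an app can derive a peer's hostname mechanically and hand it to any standard resolver. A minimal sketch of that convention in Python (the "web" process type and "sushi-app" name are hypothetical, and the lookup itself only resolves from inside the same Private Space):

```python
import socket

def space_dns_name(process_type: str, app_name: str) -> str:
    """Build the discovery hostname for a process type in a Private Space."""
    return f"{process_type}.{app_name}.app.localspace"

def dyno_ips(process_type: str, app_name: str) -> list[str]:
    """Resolve the discovery name to one IP per running dyno.

    Only succeeds when called from inside the same Private Space,
    where the .localspace zone is served.
    """
    host = space_dns_name(process_type, app_name)
    return sorted({info[4][0] for info in
                   socket.getaddrinfo(host, None, socket.AF_INET)})

print(space_dns_name("web", "sushi-app"))  # web.sushi-app.app.localspace
```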

    When combined with Heroku Flow’s continuous delivery approach, the benefits of a microservices architecture are further realized. For example, in a distributed system, each application can have a smaller footprint and a more focused purpose - so when it comes time to push updates to this system, your team can modify and continuously deliver a single portion of your architecture, instead of having to cycle out the entirety of your application. And when your application’s traffic grows, you can scale up just the portion of your system that requires extra cycles, resulting in a more flexible and economical use of resources.

    Learn More

    We’re excited to see the new possibilities Service Discovery opens up for microservices architectures. If you are interested in learning more about DNS Service Discovery for your applications in Private Spaces, please check out our Dev Center article or contact us with further questions.

  • Announcing Platform API for Partners (Heroku)
    25 May 2017 15:34

    Heroku has always made it easy for you to extend your apps with add-ons. Starting today, partners can access the Platform API to build a more secure and cohesive developer experience between add-ons and Heroku.

    Advancing the Add-on User Experience

    Several add-ons are already using the new Platform API for Partners. Adept Scale, a long-time add-on in our marketplace that provides automated scaling of Heroku dynos, has updated its integration to offer a stronger security stance, with properly scoped access to each app it is added to. Existing customer integrations have been updated as of Friday May 12th. All new installs of Adept Scale will use the more secure, scoped Platform API.

    Opbeat, a performance monitoring service for Node.js developers, is using the Platform API in production to sync its user roles to match Heroku. It also synchronizes metadata, so that its data stays consistent with Heroku when users make changes, for instance when renaming a Heroku app. This connection enables a more cohesive experience between the two tools.

    We have a list of standard endpoints that partners can use documented in the Dev Center, with more functionality coming soon. For new integrations that may require additional endpoints, we ask partners to reach out to us directly about making specific endpoints from the Platform API available. Please contact us with information about your intended integration.
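    Under the hood, the Platform API is plain JSON over HTTPS, versioned through an Accept header. A sketch of what a partner request looks like in Python; the app name and token below are placeholders, and in a real integration the scoped token would come from the add-on's OAuth grant rather than being hard-coded:

```python
import urllib.request

# Placeholder credentials for illustration only.
TOKEN = "REDACTED-OAUTH-TOKEN"
APP = "example-customer-app"

req = urllib.request.Request(
    f"https://api.heroku.com/apps/{APP}",
    headers={
        # Version 3 is selected via the vendor media type.
        "Accept": "application/vnd.heroku+json; version=3",
        "Authorization": f"Bearer {TOKEN}",
    },
)

# urllib.request.urlopen(req) would return the app's JSON description;
# it is not executed here because the token and app are placeholders.
print(req.full_url)
```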

    As add-on partner adoption of the Platform API grows, Heroku customers can expect to see a more cohesive, reliable and secure developer experience when using add-ons, and a wider range of add-on offerings in our Elements marketplace.
