neetpiq

My two cents about software development on the web


Cloud Platforms


  • Windows Server 2012 R2, IIS 8.5, WebSockets and .NET 4.5.2 (AppHarbor Blog)

    During the last couple of weeks we've upgraded worker servers in the US and EU regions to support Windows Server 2012 R2, IIS 8.5 and .NET 4.5.2. Major upgrades like this can be risky and lead to compatibility issues, so the upgrade was carefully planned and executed to maximize compatibility with running applications. Application performance and error rates have been closely monitored throughout the process and fortunately, chances are you haven't noticed a thing: we've detected migration-related issues with less than 0.1% of running applications.

    Many of the new features and configuration improvements enabled by this upgrade will be gradually introduced over the coming months. This way we can ensure a continued painless migration and maintain compatibility with the previous Windows Server 2008 R2/IIS 7.5 setup, while we iron out any unexpected kinks if and when they crop up. However, a few changes have already been deployed that we want to fill you in on.

    WebSocket support and the beta region

    Last year the beta region featuring experimental WS2012 and WebSockets support was introduced. The beta region allowed customers to test existing and new apps on the new setup while we prepared and optimized it for production use. This approach has been an important factor in learning about subtle differences between the server versions, and addressing pretty much all compatibility issues before upgrading the production regions. Thanks to all the customers who provided valuable feedback during the beta and helped ensure a smoother transition for everyone.

    An important reason for the server upgrade was to support WebSocket connections. Now that the worker servers are running WS2012 and IIS 8.5 we've started doing just that. Applications in the old beta region have been merged into the production US region and the beta region is no longer available when you create a new application.

    Most load balancers already support WebSockets and the upgrade is currently being rolled out to the remaining load balancers. Apps created since August 14th fully support WebSockets and no configuration is necessary: AppHarbor will simply detect and proxy connections as expected when a client sends a Connection: Upgrade header.

    Some libraries, such as SignalR, will automatically detect and prefer WebSocket connections when supported by both the server and client. Until WebSocket connections are supported on all load balancers, some apps may attempt the WebSocket handshake and fail. This should not cause issues since these libraries will fall back to other supported transports, and affected apps will automatically be WebSocket-enabled once their load balancers support it.
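
    For a concrete picture of what this looks like in application code, here's a minimal sketch of a SignalR hub (assuming SignalR 2.x); no WebSocket-specific configuration is needed, since SignalR negotiates the best available transport on its own:

    using Microsoft.AspNet.SignalR;

    // A minimal hub: SignalR picks WebSockets when both the client and
    // the load balancer support them, and falls back to server-sent
    // events or long polling otherwise.
    public class ChatHub : Hub
    {
        public void Send(string message)
        {
            // Broadcast the message to all connected clients.
            Clients.All.broadcast(message);
        }
    }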

    CPU throttling

    One of the major challenges that has held back this upgrade is a change in the way we throttle worker CPU usage. CPU limitations are the same as before, but the change can affect how certain CPU-intensive tasks are executed. Resources and documentation on this subject are limited, but testing shows that CPU time is more evenly scheduled across threads, leading to higher concurrency, consistency and stability within processes. While this is overall an improvement it can also affect peak performance on individual threads, and we're currently investigating various approaches to better support workloads affected by this.

    For the curious, we previously used a CPU rate limit registry setting to limit CPU usage per user account, but this is no longer supported on Windows Server 2012. We now use a combination of IIS 8's built-in CPU throttling and a new CPU rate control for job objects to throttle background workers.

    If you've experienced any issues with this upgrade or have feedback about the process, please don't hesitate to reach out.

  • Heartbleed Security Update (AppHarbor Blog)

    Updated on April 10, 2014 with further precautionary steps in the "What can you do" section below.

    On April 7, 2014, a serious vulnerability in the OpenSSL library (CVE-2014-0160) was publicly disclosed. OpenSSL is a cryptography library used for the majority of private communications across the internet.

    The vulnerability, nicknamed "Heartbleed", would allow an attacker to steal secret certificate keys, names and passwords of users and other secrets encrypted using the OpenSSL library. As such it represents a major risk for a large number of internet applications and services, including AppHarbor.

    What has AppHarbor done about this

    AppHarbor responded to the announcement by immediately taking steps to remediate the vulnerability:

    1. We updated all affected components with the updated, secure version of OpenSSL within the first few hours of the announcement. This included SSL endpoints and load balancers, as well as other infrastructure components used internally at AppHarbor.
    2. We re-keyed and redeployed all potentially affected AppHarbor SSL certificates (including the piggyback *.apphb.com certificate), and the old certificates are being revoked.
    3. We notified customers with custom SSL certificates last night, so they could take steps to re-key and reissue certificates, and have the old ones revoked.
    4. We reset internal credentials and passwords.
    5. User session cookies were revoked, requiring all users to sign in again.

    Furthermore, AppHarbor validates session cookies against your previously known IP addresses as part of the authorization process. This has reduced the risk of a stolen session cookie being abused. Perfect forward secrecy was deployed to some load balancers, making it impossible to decrypt intercepted communication with stolen keys. Forward secrecy has since been deployed to all load balancers hosted by AppHarbor.

    What can you do

    We have found no indication that the vulnerability was used to attack AppHarbor. By quickly responding to the issue and taking the steps mentioned above we effectively stopped any further risk of exposure. However, due to the nature of this bug, we recommend users who want to be extra cautious to take the following steps:

    1. Reset your AppHarbor password.
    2. Review the sign-in and activity history on your user page for any suspicious activity.
    3. Revoke authorizations for external applications that integrate with AppHarbor.
    4. Recreate, reissue and reinstall any custom SSL certificates you may have installed, and revoke the old ones. Reissuing may revoke the old certificates immediately, so make sure you're ready to install the new certificates.
    5. Read the details about the Heartbleed bug here and assess the risks relative to your content.

    Updated instructions (April 10, 2014):

    While we still have not seen any abuse on AppHarbor as a result of this bug, we now also encourage you to take these precautionary steps:

    1. Reset your build URL token.
    2. If you're using one of the SQL Server or MySQL add-ons: Reset the database password. Go to the add-on's admin page and click the "Reset Password" button. This will immediately update the configuration on AppHarbor and redeploy the application (with a short period of downtime until it is redeployed).
    3. If you're using the Memcacher add-on: Reinstall the add-on by uninstalling and installing it.
    4. Rotate/update sensitive information in your own configuration variables.

    If you have hardcoded passwords/connection strings for any of your add-ons, this is a good opportunity to start using the injected configuration variables. You can find instructions for the SQL add-ons here and the Memcacher add-on here. This way your application is automatically updated when you reset the add-ons, or when an add-on provider updates the configuration. If this is not an option you should immediately update your code/configuration files and redeploy the application after the configuration is updated.
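
    As a minimal sketch, reading an injected connection string from your application's configuration instead of hardcoding it looks like this (the key name is illustrative; use the one listed on your add-on's page):

    using System.Configuration;

    // Read the connection string that AppHarbor injects at deploy time.
    // The key name below is illustrative; check the add-on's page.
    var connectionString =
        ConfigurationManager.AppSettings["SQLSERVER_CONNECTION_STRING"];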

    Stay tuned

    Protecting your code and data is our top priority, and we continue to remediate and assess the risks in response to this issue. We'll keep you posted on any new developments, so stay tuned on Twitter and the blog for important updates. We're of course also standing by on the support forums if you have any questions or concerns.

  • Librato integration and built-in performance metrics (AppHarbor Blog)

    Librato Dashboard

    Being able to monitor and analyze key application metrics is an essential part of developing stable, performant and high quality web services that meet your business requirements. Today we’re announcing a great new set of features to provide a turnkey solution for visualizing, analyzing and acting on key performance metrics. On top of that we’re enabling you to easily track your own operational metrics. In this blog post we’ll look at how the pieces tie together.

    Librato integration

    The best part of today’s release is our new integration with Librato for monitoring and analyzing metrics. Librato is an awesome and incredibly useful service that enables you to easily visualize and correlate metrics, including the new log-based performance metrics provided by AppHarbor (described in more detail below).

    Librato Dashboard

    Librato is now available as an add-on and integrates seamlessly with your AppHarbor logs. When you provision the add-on, Librato will set up a preconfigured dashboard tailored for displaying AppHarbor performance data, and you can access it immediately by going to the Librato admin page. Everything will work out of the box without any further configuration and your logs will automatically be sent to Librato using a log drain.

    When log messages containing metric data are sent to Librato they’re transformed by an l2met service before being sent to Librato's regular API. A very cool feature of the l2met service is that it can automatically calculate some useful metrics. For instance, it’ll calculate the median response time as well as the 99th and 95th percentiles of measurements such as response times. The perc99 response time is the value below which 99% of response times fall. It can be useful to know this value since it's less affected by a few very slow responses than the average. Among other things this provides a good measurement of the browsing experience for most of your users.
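
    To make the percentile semantics concrete, here's a rough sketch of how a 99th percentile can be computed from a batch of measurements (the nearest-rank method; l2met's exact algorithm may differ, and responseTimes is assumed to be a non-empty collection of doubles):

    using System;
    using System.Linq;

    // Nearest-rank 99th percentile: sort the measurements and take the
    // value 99% of the way through the sorted list.
    var sorted = responseTimes.OrderBy(t => t).ToList();
    var perc99 = sorted[(int)Math.Ceiling(sorted.Count * 0.99) - 1];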

    Librato Dashboard

    The l2met project was started by Ryan Smith - a big shout-out and thanks to him and the Librato team for developing this great tool.

    For more information about how to integrate with Librato and details about the service please refer to the documentation here. Also check out their announcement blog post about the integration.

    Built-in performance metrics

    AppHarbor can now write key runtime performance metrics directly to your application’s log stream as l2met 2.0 formatted messages similar to this:

    source=web.5 sample#memory.private_bytes=701091840
    source=web.5 sample#process.handles=2597
    source=web.5 sample#cpu.load_average=1.97
    

    These are the messages Librato uses as well and most of them are written every 20 seconds. They allow for real-time monitoring of worker-specific runtime metrics such as CPU (load average) and memory usage, as well as measurements of response time and size reported from the load balancers. Because these metrics are logged to your log stream you can also consume them in the same way you’d usually view or integrate with your logs.

    Load average run-time metrics

    Performance data collection takes place completely out-of-process, without using a profiler, and it can be enabled and disabled without redeploying the application. This means that monitoring won’t impact application performance at all and that a profiler (such as New Relic) can still be attached to the application.

    Writing custom metrics

    The performance metrics provided by AppHarbor are probably not the only ones you want to track. You can of course integrate directly with Librato’s API, but the l2met integration makes it easier than ever to track your own metrics, and the paid Librato plans include the ability to track custom metrics exactly for that purpose.

    You can start writing your own metrics simply by sending an l2met-formatted string to your logs. Last week we introduced the Trace Logging feature which is perfect for this, so writing your custom metrics can now be done with a simple trace:

    Trace.TraceInformation("measure#twitter.lookup.time=433");
    

    To make this even easier we’ve built the metric-reporter library (a .NET port of Librato’s log-reporter) to provide an easy-to-use interface for writing metrics to your log stream. You can install it with NuGet:

    Install-Package MetricReporter
    

    Then initialize a MetricReporter which writes to a text writer:

    var writer = new L2MetWriter(new TraceTextWriter());
    var reporter = new MetricReporter(writer);
    

    And start tracking your own custom metrics:

    reporter.Increment("jobs.completed");
    reporter.Measure("payload.size", 21276);
    reporter.Measure("twitter.lookup.time", () =>
    {
        //Do work
        twitterRequest.GetResponse();
    });
    

    On Librato you can then view charts with these new metrics along with the performance metrics provided by AppHarbor, and add them to your dashboards, aggregate and correlate data, set up alerts etc. The MetricReporter library will take care of writing l2met-formatted metrics using the appropriate metric types and write to the trace or another IO stream. Make sure to inspect the README for more examples and information on configuration and usage.
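
    For reference, the calls above translate to l2met-formatted lines in your log stream roughly like these (a sketch; the exact prefix depends on the metric type):

    count#jobs.completed=1
    measure#payload.size=21276
    measure#twitter.lookup.time=433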

    That’s all we have for today. There’ll be more examples on how you can use these new features soon, but for now we encourage you to take it for a spin, install the Librato add-on and test the waters for yourself. We’d love to hear what you think so if there are other metrics you’d like to see or if you experience any issues please hit us up through the usual channels.

  • Introducing Trace Logging (AppHarbor Blog)

    Today we’re happy to introduce trace message integration with your application log. With tracing you can very easily log trace messages to your application's log stream by using the built-in tracing capabilities of the .NET framework from anywhere in your application.

    When introducing the realtime logging module a while back we opened up access to collated log data from load balancers, the build and deploy infrastructure, background workers and more. Notably missing however was the ability to log from web workers. We’re closing that gap with tracing, which can be used in both background and web workers.

    How to use it

    The trace feature integrates with standard .NET tracing, so you don’t have to make any changes to your application to use it. You can simply log traces from your workers with the System.Diagnostics.Trace class:

    Trace.TraceInformation("Hello world");
    

    This will yield a log message containing a timestamp and the source of the trace in your application’s log like so:

    2014-01-22T06:46:48.086+00:00 app web.1 Hello world
    

    You can also use a TraceSource by specifying the trace source name AppHarborTraceSource:

    var traceSource = new TraceSource("AppHarborTraceSource", defaultLevel: SourceLevels.All);
    traceSource.TraceEvent(TraceEventType.Critical, 0, "Foo");
    

    You may not always want noisy trace messages in your logs and you can configure the trace level on the "Logging" page. There are 4 levels: All, Warning, Error and None. Setting the trace level will update the configuration without redeploying or restarting the application. This is often desirable if you need to turn on tracing when debugging and diagnosing an ongoing or state-related issue.

    Configure Trace level

    There are a number of other ways to use the new tracing feature including:

    • ASP.NET health monitoring (for logging exceptions, application lifecycle events etc).
    • A logging library such as NLog (Trace target) or log4net (TraceAppender).
    • Integrating with ETW (Event Tracing for Windows) directly using the injected event provider id.

    Anything that integrates with .NET tracing or ETW should work, and you can find more details and examples in this knowledge base article.

    All new applications have tracing enabled by default. Tracing can be enabled for existing applications on the "Logging" page.

    How does it work

    Under the hood we’re using ETW for delivering log messages to the components that are responsible for sending traces to your log stream. Application performance is unaffected by the delivery of log messages as this takes place completely out of process. Note however that messages are buffered for about a second and that some messages may be dropped if you’re writing excessively to the trace output.

    When tracing is enabled, AppHarbor configures your application with an EventProviderTraceListener as a default trace listener. While you can integrate directly with ETW as well we recommend using the Trace or TraceSource approaches described above.

    Viewing trace messages

    Traces are collated with other logging sources in your log stream, so you can consume them in the same way you’re used to. You can view log messages using the command line interface, the web viewer or set up a log drain to any HTTP, HTTPS or syslog endpoint. For more information about the various integration points please refer to this article.

    Viewing trace messages in console

    We’ve got a couple of cool features that build on this coming soon, so stay tuned and happy tracing!

  • .NET 4.5.1 is ready (AppHarbor Blog)

    Microsoft released .NET 4.5.1 a while back, bringing a bunch of performance improvements and new features to the framework. Check out the announcement for the details.

    Over the past few weeks we have updated our build infrastructure and application servers to support this release. We're happy to report that AppHarbor now supports building, testing and running applications targeting the .NET 4.5.1 framework, as well as solutions created with Visual Studio 2013 and ASP.NET MVC 5 applications.

    There are no known issues related to this release. If you encounter problems, please refer to the usual support channels and we'll help you out.

    .NET logo

  • Integrated NuGet Package Restore (AppHarbor Blog)

    A few months ago the NuGet team released NuGet 2.7, which introduced a new approach to package restore. We recently updated the AppHarbor build process to adopt this approach and integrate the new NuGet restore command. AppHarbor will now automatically invoke package restore before building your solution.

    Automatically restoring packages is a recommended practice, especially because you don’t have to commit the packages to your repository and can keep the footprint small. Until now we’ve recommended using the approach described in this blog post to restore NuGet packages when building your application. This has worked relatively well, but it’s also a bit of a hack and has a few caveats:

    • Some NuGet packages rely on files that need to be present and imported when MSBuild is invoked. This has most notably been an issue for applications relying on the Microsoft.Bcl.Build package for the reasons outlined in this article.
    • NuGet.exe has to be committed and maintained with the repository, and project and solution files need to be configured.
    • Package restore can intermittently fail in some cases when multiple projects are built concurrently.

    With this release we expect to eliminate these issues and provide a more stable, efficient and streamlined way of handling package restore.

    If necessary, NuGet can be configured by adding a NuGet.config file in the same directory as your solution file (or alternatively in a .nuget folder under your solution directory). You usually don't have to configure anything if you’re only using the official NuGet feed, but you’ll need to configure your application if it relies on other package sources. You can find an example configuration file which adds a private package source in the knowledge base article about package restore and further documentation for NuGet configuration files can be found here.
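
    As a sketch, a NuGet.config that keeps the official feed and adds a private package source could look like this (both URLs are illustrative):

    <?xml version="1.0" encoding="utf-8"?>
    <configuration>
      <packageSources>
        <!-- Keep the official feed and add a private source. -->
        <add key="nuget.org" value="https://www.nuget.org/api/v2/" />
        <add key="MyPrivateFeed" value="https://example.com/nuget/" />
      </packageSources>
    </configuration>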

    If you hit any snags we’re always happy to help on our support forums.

    NuGet logo

  • New Relic Improves Service and Reduces Price (AppHarbor Blog)

    New Relic

    We're happy to announce that New Relic has dropped the price of the Professional add-on plan from $45/month to $19/month per worker unit. Over the years New Relic has proven to be a really useful tool for many of our customers, and we're pleased that this price drop will make the features of New Relic Professional more accessible to everyone using AppHarbor.

    Highlights of the Professional plan include:

    • Unlimited data retention
    • Real User Monitoring (RUM) and browser transaction tracing
    • Application transaction tracing, including Key Transactions and Cross Application Tracing
    • Advanced SQL and slow SQL analysis

    You can find more information about the benefits of New Relic Pro on the New Relic website (http://newrelic.com/pricing/details).

    Service update

    The New Relic agent was recently upgraded to a newer version which brings support for some recently introduced features as well as a bunch of bug fixes. Time spent in the request queue is now reported and exposed directly in the New Relic interface. Requests are rarely queued for longer than a few milliseconds, but it can happen if your workers are under load. When more time is spent in the request queue it may be an indicator that you need to scale your application to handle the load efficiently.

    We're also making a few changes to the way the New Relic profiler is initialized with your applications. This is particularly relevant if you've subscribed to New Relic directly rather than installing the add-on through AppHarbor. Going forward you'll need to add a NewRelic.LicenseKey configuration variable to make sure the profiler is attached to your application. We recommend that you make this change as soon as possible. If you're subscribed to the add-on through AppHarbor no action is required and the service will continue to work as usual.

  • Found Elasticsearch add-on available (AppHarbor Blog)

    Found ElasticSearch

    Found provides fully hosted and managed Elasticsearch clusters; each cluster has reserved memory and storage ensuring predictable performance. The HTTPS API is developer-friendly and existing Elasticsearch libraries such as NEST, Tire, PyES and others work out of the box. The Elasticsearch API is unmodified, so for those with an existing Elasticsearch integration it is easy to get started.
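
    For example, pointing the NEST client at a Found cluster takes just a couple of lines (a sketch assuming a NEST 1.x-era API; the cluster URL is illustrative and comes from the add-on page):

    using Nest;

    // Connect to the cluster over HTTPS; the URL is illustrative.
    var settings = new ConnectionSettings(
            new Uri("https://example-cluster.foundcluster.com:9243"))
        .SetDefaultIndex("myapp");
    var client = new ElasticClient(settings);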

    For production and mission critical environments customers can opt for replication and automatic failover to a secondary site, protecting the cluster against unplanned downtime. Security is a strong focus: communication to and from the service is securely transmitted over HTTPS (SSL) and data is stored behind multiple firewalls and proxies. Clusters run in isolated containers (LXC) and customisable ACLs allow for restricting access to trusted people and hosts.

    In the event of a datacenter failure, search clusters are automatically failed over to a working datacenter or, in case of a catastrophic event, completely rebuilt from backup.

    Co-founder Alex Brasetvik says: "Found provides a solution for companies who are keen to use Elasticsearch but not overly keen to spend their time and money on herding servers! We provide our customers with complete cluster control: they can scale their clusters up or down at any time, according to their immediate needs. It's effortless and there's zero downtime."

    More information and price plans are available on the add-on page.

  • Introducing Realtime Logging (AppHarbor Blog)

    Today we're incredibly excited to announce the public beta of our brand new logging module. Starting immediately all new applications created on AppHarbor will have logging enabled. You can enable it for your existing apps on the new "Logging" page.

    We know all too well that running applications on a PaaS like AppHarbor sometimes can feel like a black box. So far we haven't had a unified, simple and efficient way to collate, present and distribute log events from the platform and your apps.

    That's exactly what we wanted to address with our logging solution, and based on the amazing feedback from private beta users we feel confident that you'll find it useful for getting insight about your application and AppHarbor. A big thanks to all the beta testers who have helped us refine and test these new features.

    The new logging module collates log messages from multiple sources, including almost all AppHarbor infrastructure components and your applications: API changes, load balancer request logs, build and deploy output, stdout/stderr from your background workers and more can now be accessed and sent to external services in real time.

    Captain's log: consider yourself lucky we're not that much into skeuomorphism

    Interfaces

    We're providing two interfaces "out of the box": a convenient web interface can be accessed on the Logging page, and a new log command has been added to the CLI. Get the installer directly from here or install with Chocolatey: cinst appharborcli.install. To start a "tailing" log session with the CLI, you can for instance run appharbor log -t -s appharbor. Type appharbor log -h to see all options.

    The web interface works a bit differently, but try it out and let us know what you think - it's heavily inspired by the log.io project, which has built a great client-side interface for viewing, filtering, searching and splitting logs into multiple "screens".

    Log web interface

    Integration

    One of the most useful and interesting aspects of today's release is the flexible integration points it provides. Providing access to your logs in realtime is one thing, but AppHarbor will only store the last 1500 log messages for your application. Storing, searching, viewing and indexing logs can be fairly complex, and luckily many services already exist that help you make more sense of your log data.

    We've worked with Logentries to provide a completely automated and convenient way of sending AppHarbor logs to them. When you add the Logentries add-on, your application is automatically configured to send logs to Logentries, and Logentries is configured to display log messages in AppHarbor's format.

    Logentries integration

    You can also configure any syslog (TCP), HTTP and HTTPS endpoint you like with log "drains". You can use this to integrate with services like Loggly and Splunk, or even your own syslog server or HTTP service. More details about log drains are available in this knowledge base article and the drain API documentation.

    Finally there's a new Log session API endpoint that you can use to create sessions similar to the ones used by the interfaces we provide.

    Logplex

    If you've ever used Heroku you'll find most of these features very familiar. That's no coincidence - the backend is based on Heroku's awesome distributed syslog router, Logplex. Integrating with Logplex makes it a lot easier for add-on providers who already support Heroku's Logplex to integrate with AppHarbor, while giving us a scalable and proven logging backend to support thousands of deployed apps.

    Logplex is also in rapid, active development, and a big shout-out to the awesome people at Heroku who are building this incredibly elegant solution. If you're interested in learning more about Logplex we encourage you to check out the project on Github and try it for yourself. We've built a client library for interacting with Logplex's HTTP API and HTTP log endpoints from .NET apps - let us know if you'd like to use this and we'll be happy to open source the code. The Logplex documentation on stream management is also useful for a high-level overview of how Logplex works.

    Next steps

    With this release we've greatly improved the logging experience for our customers. We're releasing this public beta since we know it'll be useful to many of you as it is, but we're by no means finished. We want to add even more log sources, provide more information from the various infrastructure components and integrate with more add-on providers. Also note that request logs are currently only available on shared load balancers, but they will be rolled out to all load balancers soon. If you find yourself wanting some log data that is not currently available please let us know. We now have a solid foundation to provide you with the information you need when you need it, and we couldn't be more excited about that.

    We'll provide you with some examples and more documentation for these new features over the next couple of weeks, but for now we hope you'll take it for a spin and test the waters for yourself. Have fun!

  • Introducing PageSpeed optimizations (AppHarbor Blog)

    Today we're introducing a new experimental feature: Google PageSpeed optimizations support. The PageSpeed module is a suite of tools that tries to optimize web page latency and bandwidth usage of your websites by rewriting your content to implement web performance best practices. Reducing the number of requests to a single domain, optimizing cache policies and compressing content can significantly improve web performance and lead to a better user experience.

    With PageSpeed optimization filters we're making it easier to apply some of these best practices, and provide a solution that efficiently and effortlessly speeds up your web apps. The optimizations take place at the load balancer level and work for all web applications no matter what framework or language you use.

    As an example of how this works you can inspect the HTML and resources of this blog to see some of the optimizations that are applied. Analyzing blog.appharbor.com with the online PageSpeed insights tool yields a "PageSpeed score" of 88 when enabled versus 73 when disabled. Not too bad considering it only took a click to enable it.

    PageSpeed button

    You can enable PageSpeed optimizations for your web application on the new "Labs" page, which can be found in the application navigation bar. The application will be configured with PageSpeed's core set of filters within a few seconds, and those filters will then be applied to your content.

    When you've enabled PageSpeed we recommend that you test the application to make sure it doesn't break anything. You can also inspect the returned content in your browser and if you hit any snags simply disable PageSpeed and let support know about it. Note that only content transferred over HTTP from your domain will be processed by PageSpeed filters. To optimize HTTPS traffic you can enable SPDY support (although that is currently only enabled on dedicated load balancers and in the beta region).

    We'll make more filters available later on, but for the beta we're starting out with a curated set of core filters, which are considered safe for most web applications. There are a few other cool filters we'll add support for later on - such as automatic sprite image generation and lazy-loading of images. Let us know if there are any filters in the catalog you think we should support!

  • On the Rise of Kotlin (Heroku)
    20 Jun 2017 15:27

    It’s rare when a highly structured language with fairly strict syntax sparks emotions of joy and delight. But Kotlin, which is statically typed and compiled like other less friendly languages, delivers a developer experience that thousands of mobile and web programmers are falling in love with.

    The designers of Kotlin, who have years of experience with developer tooling (IntelliJ and other IDEs), created a language with very specific developer-oriented requirements. They wanted a modern syntax, fast compile times, and advanced concurrency constructs while taking advantage of the robust performance and reliability of the JVM. The result, Kotlin 1.0, was released in February 2016 and its trajectory since then has been remarkable. Google recently announced official support for Kotlin on Android, and many server-side technologies have introduced Kotlin as a feature.

    The Spring community announced support for Kotlin in Spring Framework 5.0 last month and the Vert.x web server has worked with Kotlin for over a year. Kotlin integrates with most existing web applications and frameworks out-of-the-box because it's fully interoperable with Java, making it easy to use your favorite libraries and tools.
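
    That interoperability requires no wrappers or ceremony; for instance, a plain Java API can be called directly from Kotlin:

    import java.time.LocalDate

    fun main(args: Array<String>) {
        // LocalDate is a plain Java 8 API, called directly from Kotlin.
        val today = LocalDate.now()
        println("Today is $today")
    }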

    But ultimately, Kotlin is winning developers over because it’s a great language. Let’s take a look at why it makes us so happy.

    A Quick Look at Kotlin

    The first thing you’ll notice about Kotlin is how streamlined it is compared to Java. Its syntax borrows from languages like Groovy and Scala, which reduce boilerplate by making semicolons optional as statement terminators, simplifying for loops, and adding support for string templating among other things. A simple example in Kotlin is adding two numbers inside of a string like this:

    val sum: String = "sum of $a and $b is ${a + b}"
    

    The val keyword is a feature borrowed from Scala. It defines an immutable variable, which in this case is explicitly typed as a String. But Kotlin can also infer that type. For example, you could write:

    val x = 5
    

    In this case, the type Int is inferred by the compiler. That’s not to say the type is dynamic though. Kotlin is statically typed, but it uses type inference to reduce boilerplate.

    Like many of the JVM languages it borrows from, Kotlin makes it easier to use functions and lambdas. For example, you can filter a list by passing it an anonymous function as a predicate:

    val positives = list.filter { it > 0 }
    

    The it variable in the function body references the first argument to the function by convention. This is borrowed from Groovy, and eliminates the boilerplate of defining parameters.

    You can also define named functions with the fun keyword. The following example creates a function with default arguments, another great Kotlin feature that cleans up your code:

    fun printName(name: String = "John Doe") {
      println(name)
    }
    

    But Kotlin does more than borrow from other languages. It introduces new capabilities that other JVM languages lack. Most notable are null safety and coroutines.

    Null safety means that a Kotlin variable cannot be set to null unless it is explicitly defined as a nullable variable. For example, the following code would generate a compiler error:

    val message: String = null
    

    But if you add a ? to the type, it becomes nullable. Thus, the following code is valid to the compiler:

    val message: String? = null
    

    Null safety is a small but powerful feature that prevents numerous runtime errors in your applications.
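
    Day-to-day code rarely needs explicit null checks either. As a small additional sketch, the safe-call (?.) and Elvis (?:) operators handle nullable values concisely:

    val message: String? = null
    // Safe-call returns null instead of throwing; Elvis supplies a fallback.
    val length = message?.length ?: 0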

    Coroutines, on the other hand, are more than just syntactic sugar. Coroutines are chunks of code that can be suspended to prevent blocking a thread of execution, which greatly simplifies asynchronous programming.

    For example, the following program starts 100,000 coroutines using the launch function. The body of the coroutine can be paused at a suspension point so the main thread of execution can perform some other work while it waits:

    fun main(args: Array<String>) = runBlocking<Unit> {
      var number = 0
      val random = Random()
      val jobs = List(100_000) {
        launch(CommonPool) {
          delay(10)
          number += random.nextInt(100)
        }
      }
      jobs.forEach { it.join() }
      println("The answer is: $number")
    }
    

    The suspension point is the delay call. Otherwise, each coroutine simply adds a random number to a shared counter, and the total is printed at the end.

    Coroutines are still an experimental feature in Kotlin 1.1, but early adopters can use them in their applications today.

    Despite all of these great examples, the most important feature of Kotlin is its ability to integrate seamlessly with Java. You can mix Kotlin code into an application that’s already based on Java, and you can consume Java APIs from Kotlin with ease, which smooths the transition and provides a solid foundation.

    Kotlin Sits on the Shoulders of Giants

    Behind every successful technology is a strong ecosystem. Without the right tools and community, a new programming language will never achieve the uptake required to become a success. That’s why it’s so important that Kotlin is built into the Java ecosystem rather than outside of it.

    Kotlin works seamlessly with Maven and Gradle, which are two of the most reliable and mature build tools in the industry. Unlike other programming languages that attempted to separate from the JVM ecosystem by reinventing dependency management, Kotlin is leveraging the virtues of Java for its tooling. There are attempts to create Kotlin-based build tools, which would be a great addition to the Kotlin ecosystem, but they aren't a prerequisite for being productive with the language.

    Kotlin also works seamlessly with popular JVM web frameworks like Spring and Vert.x. You can even create a new Kotlin-based Spring Boot application from the Spring Initializer web app. There has been a huge increase in adoption of Kotlin for apps generated this way.

    Kotlin has great IDE support too, thanks to its creators. The best way to learn Kotlin is by pasting some Java code into IntelliJ and allowing the IDE to convert it to Kotlin code for you. All of these pieces come together to make a recipe for success. Kotlin is poised to attract both new and old Java developers because it's built on solid ground.

    If you want to see how well Kotlin fits into existing Java tooling, try deploying a sample Kotlin application on Heroku using our Getting Started with Kotlin guide. If you're familiar with Heroku, you'll notice that it looks a lot like deploying any other Java-based application on our platform, which helps make the learning curve for Kotlin relatively flat. But why should you learn Kotlin?

    Why Kotlin?

    Heroku already supports five JVM languages that cover nearly every programming language paradigm in existence. Do we need another JVM Language? Yes. We need Kotlin as an alternative to Java just as we needed Java as an alternative to C twenty years ago. Our existing JVM languages are great, but none of them have demonstrated the potential to become the de facto language of choice for a large percentage of JVM developers.

    Kotlin has learned from the JVM languages that preceded it and borrowed the best parts from those ecosystems. The result is a well-rounded, powerful, and production-ready platform for your apps.

  • Habits of a Happy Node Hacker 2017 (Heroku)
    14 Jun 2017 15:50

    It’s been a little over a year since our last Happy Node Hackers post, and even in such a short time much has changed and some powerful new tools have been released. The Node.js ecosystem continues to mature and new best practices have emerged.

    Here are 8 habits for happy Node hackers updated for 2017. They're specifically for app developers, rather than module authors, since those groups have different goals and constraints:

    1. Lock Down Your Dependency Tree

    In modern Node applications, your code is often only the tip of an iceberg. Even a small application could have thousands of lines of JavaScript hidden in node_modules. If your application specifies exact dependencies in package.json, the libraries you depend on probably don’t. Over time, you'll get slightly different code for each install, leading to unpredictability and potentially introducing bugs.

    In the past year Facebook surprised the Node world when it announced Yarn, a new package manager that lets you use npm's vast registry of nearly half a million modules and features a lockfile that saves the exact version of every module in your dependency tree. This means that you can be confident that the exact same code will be downloaded every time you deploy your application.

    Not to be outdone, npm released a new version with a lockfile of its own. Oh, and it's a lot faster now too. This means that whichever modern package manager you choose, you'll see a big improvement in install times and fewer errors in production.

    To get started with Yarn, install it and run yarn in your application’s directory. This will install your dependencies and generate a yarn.lock file which tells Heroku to use Yarn when building your application.

    To use npm 5, update locally by running npm install -g npm@latest and reinstall your application's dependencies by running rm -rf node_modules && npm install. The generated package-lock.json will let Heroku know to use npm 5 to install your modules.

    2. Hook Things Up

    Lifecycle scripts make great hooks for automation. If you need to run something before building your app, you can use the preinstall script. Need to build assets with grunt, gulp, browserify, or webpack? Do it in the postinstall script.

    In package.json:

    "scripts": {
      "postinstall": "grunt build",
      "start": "node app.js"
    }
    

    You can also use environment variables to control these scripts:

    "postinstall": "if $BUILD_ASSETS; then npm run build-assets; fi",
    "build-assets": "grunt build"
    

    If your scripts start getting out of control, move them to files:

    "postinstall": "scripts/postinstall.sh"
    

    3. Modernize Your JavaScript

    With the release of Node 8, the days of maintaining a complicated build system to write our application in ES2015, also known as ES6, are mostly behind us. Node is now 99% feature complete with the ES2015 spec, which means you can use new features such as template literals or destructuring assignment with no ceremony or build process!

    const combinations = [
      { number: "8.0.0", platform: "linux-x64" },
      { number: "8.0.0", platform: "darwin-x64" },
      { number: "7.9.0", platform: "linux-x64" },
      { number: "7.9.0", platform: "darwin-x64" }
    ];
    
    for (let { number, platform } of combinations) {
      console.log(`node-v${number}-${platform}.tar.gz`);
    }
    

    There are a ton of additions, and overall they work together to significantly increase the legibility of JavaScript and make your code more expressive.

    4. Keep Your Promises

    Beyond ES2015, Node 8 supports the long-awaited async and await keywords without opting in to experimental features. This feature builds on top of Promises allowing you to write asynchronous code that looks like synchronous code and has the same error handling semantics, making it easier to write, easier to understand, and safer.

    You can re-write nested callback code that looks like this:

    function getPhotos(fn) {
      getUsers((err, users) => {
        if (err) return fn(err);
        getAlbums(users, (err, albums) => {
          if (err) return fn(err);
          getPhotosForAlbums(albums, (err, photos) => {
            if (err) return fn(err);
            fn(null, photos);
          });
        });
      });
    }
    

    into code that reads top-down instead of inside-out:

    async function getPhotos() {
      const users = await getUsers();
      const albums = await getAlbums(users);
      return getPhotosForAlbums(albums);
    }
    

    You can call await on any call that returns a Promise. If you have functions that still expect callbacks, Node 8 ships with util.promisify which can automatically turn a function written in the callback style into a function that can be used with await.
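
    For instance, here's a small sketch of wrapping a callback-based API with util.promisify so it can be awaited:

    const util = require('util');
    const fs = require('fs');

    // fs.readFile expects a callback; promisify turns it into a
    // function that returns a Promise.
    const readFile = util.promisify(fs.readFile);

    async function printConfig() {
      const data = await readFile('config.json', 'utf8');
      console.log(data);
    }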

    5. Automate Your Code Formatting with Prettier

    We’ve all collectively spent too much time formatting code, adding a space here, aligning a comment there, and we all do it slightly differently than our teammate two desks down. This leads to endless debates about where the semicolon goes or whether we should use semicolons at all. Prettier is an open source tool that promises to finally eliminate those pointless arguments for good. You can write your code in any style you like, and with one command it’s all formatted consistently.

    prettier

    That may sound like a small thing but freeing yourself from arranging whitespace quickly feels liberating. Prettier was only released a few months ago, but it's already been adopted by Babel, React, Khan Academy, Bloomberg, and more!

    If you hate writing semicolons, let Prettier add them for you, or your whole team can banish them forever with the --no-semi option. Prettier supports ES2015 and Flow syntax, and the recent 1.4.0 release added support for CSS and TypeScript as well.

    There are integrations with all major text editors, but we recommend setting it up as a pre-commit hook or with a lifecycle script in package.json.

    "scripts": {
      "prettify": "prettier --write 'src/**/*.js'"
    }
    

    6. Test Continuously

    Pushing out a new feature and finding out that you've broken the production application is a terrible feeling. You can avoid this mistake if you’re diligent about writing tests for the code you write, but it can take a lot of time to write a good test suite. Besides, that feature needs to be shipped yesterday, and this is only a first version. Why write tests that will only have to be re-written next week?

    Writing unit tests in a framework like Mocha or Jest is one of the best ways of making sure that your JavaScript code is robust and well-designed. However, there is a lot of code that may not justify the time investment of an extensive test suite. The testing library Jest has a feature called Snapshot Testing that can help you get insight and visibility into code that would otherwise go untested. Instead of deciding ahead of time what the expected output of a function call should be and writing a test around it, Jest will save the actual output into a local file on the first run, then compare it to the output on the next run and alert you if it has changed.

    jest-snapshot-testing

    While this won't tell you if your code is working exactly as you'd planned when you wrote it, this does allow you to observe what changes you're actually introducing into your application as you move quickly and develop new features. When the output changes you can quickly update the snapshots with a command, and they will be checked into your git history along with your code.

    it("test /endpoint", async () => {
      const res = await request(`http://0.0.0.0:5000/endpoint`);
      const body = await res.json();
      const { status, headers } = res;
      expect({ status, body, headers }).toMatchSnapshot();
    });
    

    Example Repo

    Once you've tested your code, setting up a good CI workflow is one way of making sure that it stays tested. To that end, we launched Heroku CI. It’s built into the Heroku continuous delivery workflow, and you'll never wait for a queue. Check it out!

    Don't need the fancy features and just want a super simple test runner? Check out tape for your minimal testing needs.

    7. Wear Your Helmet

    When it comes to web application security, a lot of the important yet easy configuration that locks down a given app can be done by returning the right HTTP headers.

    You won't get most of these headers with a default Express application, so if you want to put an application in production with Express, you can go pretty far by using Helmet. Helmet is an Express middleware module for securing your app mainly via HTTP headers.

    Helmet helps you prevent cross-site scripting attacks, protect against click-jacking, and more! It takes just a few lines to add basic security to an existing express application:

    const express = require('express');
    const helmet = require('helmet');
    
    const app = express();
    
    app.use(helmet());
    

    Read more about Helmet and other Express security best practices

    8. HTTPS all the things

    By using private connections by default, we make it the norm, and everyone is safer. As web engineers, there is no reason we shouldn’t default all traffic in our applications to using HTTPS.

    In an express application, there are several things you need to do to make sure you're serving your site over https. First, make sure the Strict-Transport-Security header (often abbreviated as HSTS) is set on the response. This instructs the browser to always send requests over https. If you’re using Helmet, then this is already done for you!
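
    If you want to tune the header rather than rely on the defaults, Helmet also exposes it as a standalone middleware. A small sketch, assuming the app and helmet objects from the earlier example (the max-age value is illustrative):

    // Send Strict-Transport-Security with a ~180 day max-age (in seconds).
    app.use(helmet.hsts({ maxAge: 15552000 }));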

    Then make sure that you're redirecting any http requests that do make it to the server to the same url over https. The express-enforce-ssl middleware provides an easy way to do this.

    const express = require('express');
    const expressEnforcesSSL = require('express-enforces-ssl');
    
    const app = express();
    
    app.enable('trust proxy');
    app.use(expressEnforcesSSL());
    

    Additionally you'll need a TLS certificate from a Certificate Authority. But if you are deploying your application to Heroku and using any hobby or professional dyno, you will automatically get TLS certificates set up through Let’s Encrypt for your custom domains by our Automated Certificate Management – and for applications without a custom domain, we provide a wildcard certificate for *.herokuapp.com.

    What are your habits?

    I try to follow these habits in all of my projects. Whether you’re new to node or a server-side JS veteran, I’m sure you’ve developed tricks of your own. We’d love to hear them! Share your habits by tweeting with the #node_habits hashtag.

    Happy hacking!

  • Announcing Release Phase: Automatically Run Tasks Before a New Release is Deployed (Heroku)
    08 Jun 2017 15:37

    You’re using a continuous delivery pipeline because it takes the manual steps out of code deployment. But when a release includes updates to a database schema, the deployment requires manual intervention and team coordination. Typically, someone on the team will log into the database and run the migration, then quickly deploy the new code to production. It's a process rife with deployment risk.

    Now with Release Phase, generally available today, you can define tasks you need to run before a release is deployed to production. Simply push your code and Release Phase will automatically run your database schema migration, upload static assets to a CDN, or any other task your app needs to be ready for production. If a Release Phase task fails, the new release is not deployed, leaving the production release unaffected.

    To get started, view the release phase documentation.

    release-phase-diagram-3

    A Release Phase Example

    Let’s say you have a Node.js app, using Sequelize as your ORM, and want to run a database migration on your next release. Simply define a release command in your Procfile:

    release: node_modules/.bin/sequelize db:migrate
    web: node ./bin/www
    

    When you run git push heroku master, after the build is successful, Release Phase begins the migration via a one-off dyno. If the migration is successful, the app code is deployed to production. If the migration fails, your release is not deployed and you can check your Release Phase logs to debug.

    
    $ git push heroku master
    ... 
    Running release command….
    --- Migrating Db ---
    Sequelize [Node: 7.9.0, CLI: 2.7.9, ORM: 3.30.4]
    
    Loaded configuration file "config/config.json".
    Using environment "production".
    == 20170413204504-create-post: migrating ======
    == 20170413204504-create-post: migrated (0.054s)
    
    V23 successfully deployed 
    

    Check out the video to watch it in action:

    Heroku Flow + Release Phase

    Heroku Flow provides you with a professional continuous delivery pipeline with dev, staging, and production environments. When you promote a release from staging to production, Release Phase will automatically run your tasks in the production environment.

    With Heroku Flow you always know where a particular feature is on the path to production. Now, with Release Phase, the path to production has even fewer manual steps.

  • Introducing Heroku Shield: Continuous Delivery for High Compliance Apps (Heroku)
    06 Jun 2017 12:45

    Today we are happy to announce Heroku Shield, a new addition to our Heroku Enterprise line of products. Heroku Shield introduces new capabilities to Dynos, Postgres databases and Private Spaces that make Heroku suitable for high compliance environments such as healthcare apps regulated by the Health Insurance Portability and Accountability Act (HIPAA). With Heroku Shield, the power and productivity of Heroku is now easily available to a whole new class of strictly regulated apps.

    At the core of Heroku’s products is the idea that developers can turn great ideas into successful customer experiences at a surprising pace when all unnecessary and irrelevant elements of application infrastructure are systematically abstracted away. The design of Heroku Shield started with the question: what if regulatory and compliance complexity could be transformed into a simple developer experience, just as has been done for infrastructure complexity? The outcome is a simple, elegant user experience that abstracts away compliance complexity while freeing development teams to use the tools and services they love in a new class of app.

    Heroku Shield is generally available to Heroku Enterprise customers. For more information about Heroku Enterprise, please contact us here.

    How it Works

    shield-private-space-blog

    Shield Private Spaces

    To use Heroku Shield, start by creating a new Private Space and switch on the Shield option. The first thing you notice is that logging is now configured at the space level. With Private Space Logging, logs from all apps and control systems are automatically forwarded to the logging destination configured for the space. This greatly simplifies compliance auditing while still leaving the developers in full control of app configuration and deployment.

    Shield Private Spaces also adds a critical compliance feature to the heroku run command used by developers to access production apps for administrative and diagnostic tasks. In a Shield Private Space, all keystrokes typed in an interactive heroku run session are logged automatically. This meets a critical compliance requirement to audit all production access but without restricting developers from doing diagnostics and time sensitive remediation tasks directly on production environments.

    Shield Private Dynos and Postgres

    In a Shield Private Space you can create special Shield flavors of Dynos and Postgres databases. The Shield Private Dyno includes an encrypted ephemeral file system and restricts SSL termination from using TLS 1.0, which is considered vulnerable. Shield Private Postgres further guarantees that data is always encrypted in transit and at rest. Heroku also captures a high volume of security monitoring events for Shield dynos and databases, which helps meet regulatory requirements without imposing any extra burden on developers.

    App Innovation for Healthcare and Beyond

    With Heroku Shield, you can now build healthcare apps on Heroku that are capable of handling protected health information (PHI) in compliance with the United States HIPAA framework. The healthcare industry is living proof of how challenging it is to modernize application delivery while meeting strict compliance requirements. All you have to do is compare the user experience of most healthcare apps with what you have come to expect from apps in less regulated industries like e-commerce, productivity and social networks.

    It's simply too hard to evolve and modernize healthcare apps today because they are delivered using outdated, rigid platforms and practices. At Heroku, we are doing our small part to change this by providing development teams a HIPAA-ready platform with the industry's best Continuous Delivery Experience.

    Of course, this is just a step on our trust journey - the work of providing more security and compliance capabilities is never complete. We are already working on new capabilities and certifications for Heroku Shield, and as always look to our customers and the developer community for input on how to direct and prioritize those efforts.

    Summary

    Combining developer creativity with the opportunities for innovation in high compliance industries is powerful. Heroku has had the privilege to see what becomes possible when obstacles are removed from developers' paths, and with Shield, we hope to see that promise amplified yet again. For more information on Shield, see the Dev Center article here, or contact Heroku.

  • Announcing DNS Service Discovery for Heroku Private Spaces: Microservices Communication, Made Easy (Heroku)
    31 May 2017 15:38

    Today, we are excited to announce DNS Service Discovery for Heroku Private Spaces, an easy way to find and coordinate services for microservice-style deployments.

    As applications grow in sophistication and scale, developers often organize their applications into small, purpose-built “microservices”. These microservice systems act in unison to achieve what otherwise would be handled by a single, larger monolithic application, which simplifies applications’ codebases and improves their overall reliability.

    DNS Service Discovery is a valuable component of a true microservices architecture. It is a simple, yet effective way to facilitate microservice-style application architecture on Private Spaces using standard DNS naming conventions. As a result, your applications can now know in advance how they should reach the other process types and services needed to do their job.

    How It Works

    DNS Service Discovery allows you to connect these services together by providing a naming scheme for finding individual dynos within your Private Space. Every process type for every application in the Space is configured to respond to a standard DNS name of the format <process-type>.<application-name>.app.localspace.

    Example:

    $ nslookup web.myapp.app.localspace
    web.myapp.app.localspace. 0 IN A 10.10.10.11
    web.myapp.app.localspace. 0 IN A 10.10.10.10
    web.myapp.app.localspace. 0 IN A 10.10.10.9
    

    This is enabled by default on all newly created applications in Private Spaces. For existing Private Spaces applications, you need to run:

    $ heroku features:enable spaces-dns-discovery --app <app name>
    
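    If your app needs those addresses at runtime, any ordinary DNS lookup will do. As a minimal sketch, a Node service could resolve its sibling web dynos like this (TypeScript, reusing the example name web.myapp.app.localspace from above):

    // Resolve the A records for the "web" process type of "myapp";
    // each record is the private IP of one running dyno.
    import { promises as dns } from "dns";

    async function findWebDynos(): Promise<string[]> {
      return dns.resolve4("web.myapp.app.localspace");
    }

    findWebDynos().then((ips) => {
      // e.g. ["10.10.10.11", "10.10.10.10", "10.10.10.9"]
      for (const ip of ips) console.log(`web dyno at ${ip}`);
    });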

    When combined with Heroku Flow’s continuous delivery approach, the benefits of a microservices architecture are further realized. For example, in a distributed system, each application can have a smaller footprint and a more focused purpose - so when it comes time to push updates to this system, your team can modify and continuously deliver a single portion of your architecture, instead of having to cycle out the entirety of your application. And when your application’s traffic grows, you can scale up just the portion of your system that requires extra cycles, resulting in a more flexible and economical use of resources.

    Learn More

    We’re excited to see the new possibilities Service Discovery opens up for microservices architectures. If you are interested in learning more about DNS Service Discovery for your applications in Private Spaces, please check out our Dev Center article or contact us with further questions.

  • Announcing Platform API for Partners (Heroku)
    25 May 2017 15:34

    Heroku has always made it easy for you to extend your apps with add-ons. Starting today, partners can access the Platform API to build a more secure and cohesive developer experience between add-ons and Heroku.

    Advancing the Add-on User Experience

    Several add-ons are already using the new Platform API for Partners. Adept Scale, a long-time add-on in our marketplace that provides automated scaling of Heroku dynos, has updated its integration to offer a stronger security stance, with properly scoped access to each app it is added to. Existing customer integrations have been updated as of Friday May 12th. All new installs of Adept Scale will use the more secure, scoped Platform API.

    Opbeat, a performance monitoring service for Node.js developers, is using the Platform API in production to sync their user roles to match Heroku. It is also synchronizing metadata, so that its data stays in sync with Heroku when users make changes, for instance renaming a Heroku app. This connection enables a more cohesive experience between the two tools.

    We have a list of standard endpoints that partners can use documented in the Dev Center, with more functionality coming soon. For new integrations that may require additional endpoints, we ask partners to reach out to us directly about making specific endpoints from the Platform API available. Please contact us with information about your intended integration.

    As add-on partner adoption of the Platform API grows, Heroku customers can expect to see a more cohesive, reliable and secure developer experience when using add-ons, and a wider range of add-on offerings in our Elements marketplace.

  • Heroku CI Is Now Generally Available: Fast, Low Setup CI That’s Easy to Use (Heroku)
    18 May 2017 15:44

    Today we are proud to announce that Heroku CI, a low-configuration test runner for unit and browser testing that is tightly integrated with Heroku Pipelines, is now in General Availability.

    To build software with optimal feature release speed and quality, continuous integration (CI) is a popular best practice, and an essential part of a complete continuous delivery (CD) practice. As we have done for builds, deployments, and CD, Heroku CI dramatically improves the ease, experience, and function of CI. Now your energy can go into your apps, not your process.

    With today's addition of Heroku CI, Heroku now offers a complete CI/CD solution for developers in all of our officially supported languages: Node, Ruby, Java, Python, Go, Scala, PHP, and Clojure. As you would expect from Heroku, Heroku CI is simple, powerful, visual, and prescriptive. It is intended to provide the features and flexibility to be the complete CI solution for the vast majority of application development situations, serving use cases that range from small innovation teams, to large Enterprise projects.

    Easy to Setup and Use

    Configuration of Heroku CI is quite low (or none). There is no IT involved; Heroku CI is automatically available and coordinated for all apps in Heroku Pipelines. Just turn on Heroku CI for the Pipeline, and each push to GitHub will run your tests. Tests reside in the customary location for each supported language; in Go, for example, tests live in files whose names end in "_test.go". Because tests are executed automatically on each git push, there is no learning curve, and little reconfiguration is typically necessary when migrating to Heroku CI from Jenkins and other CI systems.

    For users who are also new to continuous delivery, we've made Heroku Pipelines setup easier than ever with a straightforward 3-step flow that automatically creates and configures your review, development, staging, and production apps. All that's left is to click the "Tests" tab and turn on Heroku CI.

    Visual at Every Stage

    From setup, to running tests, to CI management, everything about Heroku CI is intended to be fully visual and intuitive -- even for users who are new to continuous integration. For each app, the status of the latest or currently running test run is shown clearly on the Pipelines page. Test actions are a click away, and fully available via the UI: re-run any test, run new tests against an arbitrary branch, search previous tests by branch or pull request, and see full detail for any previous test. And Heroku CI integrates seamlessly with GitHub - on every git push your tests run, allowing you to also see the test result within GitHub web or GitHub Desktop interfaces.

    CI users who want more granular control, direct debug access, and programmatic control of CI actions can use the CLI interface for Heroku CI.

    Power, Speed, and Flexibility

    For every test you run, Heroku CI creates and populates an ephemeral app environment that mirrors your Staging and Production environments. These CI apps are created automatically, and then destroyed immediately after test runs complete. All the add-ons, databases, and configurations your code requires are optimized for test speed and parity with downstream environments. Over the beta period, we have been working with add-on partners to make sure the CI experience is fast and seamless.

    Setup and tear-down for each CI run happen in seconds. Because we use these ephemeral Heroku apps to run your tests, there is no queue time (as is common with many CI systems). Your tests run immediately, every time, on dedicated Performance dynos.

    Across the thousands of participants in our public beta, most developers observed test runs completing significantly faster than they expected.

    Cost-effective

    We view CI as an essential part of an effective development workflow, that is, part of a good overall delivery process.

    Each CI-enabled Heroku Pipeline is charged just $10/month for an unlimited number of test runs. For each test run, dyno charges apply only for the duration of tests. We recommend and default to Performance-M dynos to power test runs, and you can specify other dyno sizes.

    Note that all charges are pro-rated per second, with no commitment, so you can try out Heroku CI for pennies -- usually with little modification to your existing test scripts.

    Enterprise-ready

    All Heroku Enterprise customers get unlimited CI-enabled Pipelines, and an unlimited number of test runs, all, of course, with zero queue time. No provisioning, authentication set-up, or management of CI is required for new projects, and Heroku CI can be turned on for any Heroku Pipeline with a single click.

    Existing Heroku Enterprise dyno credits are automatically used for test runs, and invoices will contain a new section listing the CI-enabled Pipelines alongside the account-wide dyno usage for CI test runs.

    All test run results are available at permanent URLs that can be referenced for compliance regimes, and all authentication is managed under existing Heroku Enterprise Teams (Org) security. Unification of security, authentication, and billing between CI and production deployments, along with a prescriptive methodology across company projects, lets Enterprises innovate on Heroku with the agility of a start-up.

    Heroku-built, Community-hardened

    Some terms are not usually associated with CI systems: we think Heroku CI is among the most pleasant, beautiful software testing systems available -- and we have you to thank for this. More than 1500 beta users tested Heroku CI, surfacing bugs and offering suggestions: telling us that some webhooks got dropped, that an icon on the tab might be nice, that it should be more obvious how to re-run a test ... and roughly 600 other notes, many of which grew into e-mail conversations with you. As is the case with all software, we will keep perfecting. And we are pretty proud of what we have here. Thank you, and keep the comments coming!

    Get Started

    It's easy. Set up a Heroku Pipeline and you're ready. There's even a two-minute video here and a simple how-to. Give it a spin, and let us know what you think.

  • The Future of Ember.js: An Interview With Tom Dale at EmberConf - Part Two (Heroku)
    11 May 2017 15:25

    This is the second of a two-part transcript from a recent interview with Tom Dale of Ember.js. In part one we discussed the history and direction of the Ember.js project. Continuing the discussion of the future for Ember.js, this post includes the rest of the interview, primarily focused on the Glimmer.js project. Some of the questions were omitted from these transcriptions for brevity, so we’re also releasing the nearly hour-long audio file of the entire interview. Enjoy!

    Jonan: Let’s talk about Glimmer 2. If I understand correctly it's released now and it entirely supplants Ember. So how are you planning to gracefully sunset the project?

    Terence: I think locks (Ricardo Mendes) talked about how people already have five years of Ember experience, they can now move on to this Glimmer thing, right?

    Tom: That's right, yeah. You can put five years of Glimmer experience on your resume, on your LinkedIn profile. You know, something we really wanted to be mindful of is that it's really easy to think that we're giving up on Ember, or that we just declared bankruptcy and we’re starting over again fresh. Because actually, this is what happens in the JavaScript community all the time, right? New version, backwards incompatible, we decided that it was just too clunky to ever fix.

    Terence: Right. The Angular 1 and Angular 2 thing?

    Tom: Something like that, right?

    Jonan: And in some cases, that's the right choice.

    Terence: Yeah, I think it is.

    Jonan: In the first version, there were mistakes made, let's move on. There is no right choice in those circumstances. Maybe it's the only choice that you have.

    Tom: Right. So Glimmer is a little bit different. The first thing to understand is that Glimmer is an extraction, it's not a brand-new library.

    One piece of feedback that we get all the time is people say, "You know, I would, theoretically, be interested in using Ember but I don’t need all of that stuff. I don’t need a router, I don’t need a data layer. I just want components. I have a Rails app and I just wanna do some kind of interactive widget." People use jQuery, they use React, things that are really easy to drop in, super simple to learn.

    So, we thought about it and said, "Well, you know, we actually have this awesome rendering library in this engine called Glimmer. Why don’t we make it available to people who don’t wanna buy into Ember?"

    You know it shouldn’t be an all-or-nothing thing. We should try to think about how we can bring incremental value to people. So that's one. It's not a new project. It's an extraction.

    The other thing is that I don’t think about Glimmer as being a different project. I think about Glimmer as being a way for us to experiment with the future of the component API in Ember. So one thing that we're working on right now, and actually there is an RFC written by Godfrey Chan, is an API that lets people write plug-ins that implement different component APIs.

    Remember, LinkedIn is an Ember app. It’s an Ember app that has had a lot of money and a lot of work put into it, and I promise you, we're not just gonna throw that away and rewrite it in Glimmer.

    So we really need to focus on the experience of taking Glimmer components and bringing them into an Ember app; that's what we're working on right now. Glimmer components, I think of it as the future of the Ember component API.

    What I would really love is that people can start working on Glimmer applications, see that it has this beautiful UI, it's fast, it's slick, all these things. Then they realize, "Hey, actually, maybe I need Ember data, maybe I need a router?" And then what they'll do is just take these Glimmer components, drag and drop them into their Ember app and, boom, they just work without having to change a line of code.

    Jonan: Ember includes a lot of things. It's prepared to handle problems that you can't foresee yet, which is one of the benefits of using a framework. But that means that it's larger than maybe you need in the moment. So I could start with a very small Glimmer app, and Glimmer itself is small, right?

    Tom: Yeah, it's really small.

    Jonan: So the advantage right now, though, since we don't yet have that component transferability and can't just take an Ember component into Glimmer today, is that it's small. You described it as useful for a mobile application. The example you gave in the keynote was a temperature widget that gave the temperature in various cities.

    Give us like a real-world use case of Glimmer.

    Tom: Sure. I mean I can give you a very real-world use case, which is that before I joined LinkedIn, I was working at this really awesome little startup in New York called Monograph. One of the things Monograph was doing was building these e-commerce apps that were designed to integrate into social media apps.

    So one thing that's really amazing about the web that you can't do on native, is you can actually run inside of other apps. You can run inside of Facebook, you can run inside of Twitter, you can run inside of any social media app and that's something that native apps can't do.

    What we wanted to do was build this experience that felt very native but it also had to load really, really fast. Because you didn’t get to load it until a user tapped the link. So we actually tried to build a few prototypes in Ember and they actually worked really great on the iPhone, but then we had some investors in Australia on Android phones, and when they tapped on it, it took like 10 seconds to load. That's just not acceptable when you're trying to target these mobile apps.

    I said, "We have this great rendering engine in Ember and it's really small. I wonder if I can hack together something using this?" And the reality was that if we couldn’t, I was gonna have to use something like React.

    So I told my boss "Give me a week. Give me a week to see if I can do it in Glimmer. I have this crazy idea, let's see if we can do it.", and we actually pulled it off.

    We've run a few campaigns now and things have sold out in like an hour. So the model works.

    I think if you're building an app that needs a router and uses a data layer, yeah, you should be absolutely using Ember. This is definitely a pared down experience, but my hope is that we're gonna figure out ways of taking these big apps and kind of slicing them up in a way that will be good for mobile.

    Jonan: I just want to make sure I’ve got the technical details right here. Ember rendered JavaScript in the past, and now it is rendering bytecode: a series of opcodes that are interpreted on the client side by a couple of different VMs. You have an update VM and you have a render VM. So the first time that you load up a page, you're gonna send over some bytecode and that's gonna be interpreted by this TypeScript VM, the render VM, and then the updates will come into the update VM in the next round? Okay. And so the actual content of this bytecode, what is that?

    Tom: It's a JSON object. The reason for that is, of course, JSON is a much smaller subset of the JavaScript language. So they're more compact and they're much faster to parse.

    Modern JavaScript VMs are very fast and very good at doing just-in-time compilation to native code. So if we emit JavaScript, those will get compiled into these really fast operations.

    The problem that we didn’t realize at the time was that when you have apps that grow, that is a lot of JavaScript, and all that JavaScript gets parsed eagerly. Now you are in the situation where you're spending all this time parsing JavaScript that, for some parts of the page, never needed to be parsed, because those parts never get rendered.

    Jonan: So in the olden days, again, I need to simplify this for my own thinking here. I have a page with a home and an about page, right?

    Tom: Mm-hmm.

    Jonan: And I don’t ever click on the about tab. But that JavaScript is still there.

    Tom: It's still loaded.

    Jonan: And it's still interpreted.

    Tom: Still parsed, right.

    Jonan: And it's not necessary, right?

    Tom: Right.

    Jonan: So now, in this new world, the JSON blob that represents that about page, if the user never clicks on that link, it never actually has to get turned into anything.

    Tom: Right. We keep it resident in memory as a string, and we just-in-time JSON parse it. And of course, the JSON parsing is gonna be faster than the JavaScript parsing because it's so restricted.

    Jonan: I see. And so then, you can take that JSON and turn it directly into the page that you need. There's no other step there?

    Tom: Right.

    Jonan: I see, okay.
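
    To make that concrete, here's a toy sketch of the lazy-parsing idea (TypeScript; the opcode format is invented for illustration and is not Glimmer's actual wire format):

    // Compiled templates kept resident in memory as JSON strings.
    const templates: Record<string, string> = {
      home: '[["open","h1"],["text","Welcome"],["close"]]',
      about: '[["open","h1"],["text","About us"],["close"]]',
    };

    const parsed = new Map<string, unknown[]>();

    function opcodesFor(route: string): unknown[] {
      let ops = parsed.get(route);
      if (ops === undefined) {
        // JSON.parse runs only when the route is first rendered, so a
        // never-visited "about" page is never parsed at all.
        ops = JSON.parse(templates[route]) as unknown[];
        parsed.set(route, ops);
      }
      return ops; // the render and update VMs would interpret these
    }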

    Tom: So one of the advantages of Handlebars: you know, there's kind of this raging debate about whether you should use templates, like Ember and Glimmer and Vue do, or whether you should use JSX, like React and the libraries that follow its lead.

    One of the really nice things about Handlebars is that it's very statically analyzable. We can very easily analyze your template and say, "Okay, these are the components you're using. These are the helpers that you're using. This is the structure." That lets us do more work when you deploy, when you build and that means less work on each user's browser.

    Jonan: Right, but then also, as you talked about in your keynote, maybe it comes at the expense of file size in some cases which is another problem that Glimmer solves. Because what you're sending over the wire is actually much smaller now.

    Tom: Right. So that was the other downside of doing the compilation to JavaScript: the representation itself. I mean, think about JavaScript syntax: you have functions, and you have var, and you have all of these different things. By creating our own JSON structure, we can get that size down.

    Jonan: So, we've been talking about progressive web apps a lot this year, and there are a lot of things that have happened recently that enabled progressive web apps to actually be a thing. We can now say reliably that you can present an offline experience that is close to the real experience with the possible exception of mobile Safari. I've heard that that's a popular browser.

    Tom: It is.

    Jonan: Something like 55% of the U.S. market is using an iPhone.

    Tom: That's right.

    Jonan: So they don’t have service workers; that's the problem here, right? I wanna just explain real quick. A service worker, for my own thinking, is this thread that I can run in the background, and I can schedule tasks on it. So it doesn’t even mean you have to have my page open, right?

    Tom: Right.

    Jonan: I can go and refresh the data that I'm caching locally.

    Tom: The most important thing about the service worker, from my perspective, the thing that it unlocked in terms of taking something that usually only the browser can do, is now giving me, as a JavaScript programmer, access to intercepting network requests.

    Not just JavaScript but literally, I can have a service worker and if I put an image tag on my page and my service worker is consulted saying, "Hey, we're about to go fetch this image. Would you like to give me a version from cache?"

    That is hugely powerful when you're talking about building an offline experience. Because now you have programmatic access to the browser cache, to the way it looks at resources. So now, you have this very powerful abstraction for building whatever caching you want offline.

    Jonan: So whatever possible request could be coming from your application is more or less proxied through this service worker?

    Tom: Exactly. So in addition to the request, you also have access to the browser cache. So you can put things in, you can take things out. That's what lets you program very specific rules. Because you don’t always wanna say use from the cache, right? Sometimes, there are things that you actually want fresh, like how many items remain in inventory, right? You probably don’t want that cached. You probably wanna have the most updated information possible.
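
    As a sketch of the pattern being described here, a service worker's fetch handler can serve some requests from cache and let others always hit the network (TypeScript, assuming the compiler's "webworker" lib; the /images/ path and cache name are made up for illustration):

    declare const self: ServiceWorkerGlobalScope;

    // Cache-first for static images; everything else (e.g. live
    // inventory counts) falls through to the network, never stale.
    self.addEventListener("fetch", (event: FetchEvent) => {
      const url = new URL(event.request.url);
      if (!url.pathname.startsWith("/images/")) return;

      event.respondWith(
        caches.open("assets").then(async (cache) => {
          const hit = await cache.match(event.request);
          if (hit) return hit; // programmatic access to the browser cache
          const response = await fetch(event.request);
          await cache.put(event.request, response.clone());
          return response;
        })
      );
    });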

    Jonan: We don’t have service workers in Safari and we won't for the foreseeable future.

    Tom: Well, we don’t have it in Safari but we have it in Firefox and we have it in Chrome. You know, the P in PWA, it stands for Progressive Web App, so you can progressively add this to your application. You know I think the best way to get features into a browser is to adopt them and say "Hey, if you're using an Android phone, you have this really awesome experience. But if you have an iPhone, you know, maybe it's not as awesome."

    Apple, I truly believe, really cares about the user experience. If there's one thing I've gotten from the Safari team, it's that they always prioritize making a feature fast, and not draining the user's battery, over being able to tick a checkbox.

    So I actually have a lot of respect for their position, and I think if they do service workers, they're going to do it right. If they see that people are having a better user experience on an Android phone than an iPhone that is hugely motivating for them.

    Terence: Does the service worker impact the batteries on phones?

    Tom: It could, it could, yeah. I think what browser vendors are going to have to figure out is what is the right heuristic for making sure that we can run a service worker, but only in service of the user, pardon the pun.

    How do we make sure that people aren't using it maliciously? How do I make sure this website is not mining bitcoin on your phone and now your battery life is two hours, you know?

    Jonan: Sure, yeah.

    Tom: It's a really tricky problem.

    Jonan: Even if they're relatively innocuous, they don’t necessarily need to be malicious. If you've got a hundred of them and they're all just trying to fetch the same image online, that will dramatically impact your phone's performance.

    Tom: Yeah, absolutely. Or if you think about, you know, you install a native app and all of a sudden, you start getting push notifications, that's not great for your battery life either.

    Terence: I guess, you talked about progressive web apps in last year’s keynote, what has the uptake been since then? I know it was kind of a work in progress kind of thing, and we just saw two talks yesterday related to progressive web apps.

    Tom: Yup.

    Terence: So has the adoption been pretty strong within the community?

    Tom: Yeah, absolutely. I think people are really excited about it. I think there are so many aspects to progressive web apps, and I think the definition isn't clear exactly. It's one of these terms that people talk about. Sometimes, it becomes more of a buzzword than a very concrete thing. There are a lot of things that you can do on the path to a progressive web app.

    So service worker, as Jonan said, is the one thing that people think about the most, but there are also things like server-side rendering, to make sure that the first few bytes that you sent to the user are in service of getting content in front of them. Not just loading your dependency injection library.

    Jonan: Right.

    Tom: You really wanna get the content first. There's the ability to run offline, there's the ability to add to the home screen as a first-class icon, the ability to do push notifications.

    Jonan: Removing the browser chrome, making it feel like a native app experience.

    Tom: Yup, and actually, Android has done some really awesome work here to make a progressive web app integrate into the operating system such that as a user you can't really tell. That’s the dream.

    Jonan: Yeah, of course.

    Tom: The community uptake has been phenomenal, and this is exactly one of those things where it's gonna require experimentation. This is a brand new thing. People don’t know the optimal way to use it yet and that experimentation is happening.

    There are a ton of Ember add-ons: there are service worker add-ons, there are add-ons for adding an app manifest so you get the icon. All sorts of cool stuff.

    I think what we should start thinking about is, "Okay, well what is the mature part of this that we can start baking into the default experience when we make a new Ember app, such that you get a PWA for free?", and I would guess that we are probably on the way there, sometime this year or early next year. Saying that "You just get this really awesome PWA out of the box when you make a new Ember app."

    Jonan: That will be fantastic. I would like that very much.

    Tom: Defaults are important. I think if you care about the web, especially the mobile web being fast, the highest impact thing you can do is find out what developers are doing today and make the default the right thing.

    Terence: So do you imagine in the next couple years, PWA and FastBoot are just going to be baked into new Ember apps?

    Tom: I certainly hope so. I don’t think we want to do it before it's ready. FastBoot, in particular of course, introduces a server-side dependency.

    One thing that people really like about client side apps is that I don’t need to run my own server, I can just upload to some kind of CDN. That's nice, I don’t like doing ops. That's why I use Heroku so I don’t have to think about ops. So that's the hard thing about server-side rendering, it does introduce computation requirements when you deploy.

    So I don’t know if FastBoot will ever be the default per se, but I do know that I want to make it really easy and at least give people the option.

    "Hey, server-side rendering is really important for many kinds of apps. Do you wanna turn it on?" The PWA stuff, I think we can do it within the existing parameters of being able to do static deploys, so yeah, let's do it.

    Terence: If you have FastBoot on the app it’s totally optional though right?

    Tom: Yes, totally optional.

    Terence: You can still deploy the assets and ignore FastBoot completely, even if it was part of the standard app, right?

    Tom: That's true. Yeah, that's true, and really that, I think, is the beauty of client-side architecture plus server-side rendering. "Oh, my server is over capacity." Well, you can just have your load balancer fall back to the static site, and maybe the user doesn’t get the first view as fast but they still get the full experience.

    So much of what FastBoot is, is this conventional way of having not just the server-side rendering but also having a good user experience around it. So much of that relies on the good bits of Ember, the very conventional structure. So I think Glimmer will rapidly support server-side rendering but massaging that into an easy-to-use thing is, I think, an Ember responsibility.

    Jonan: The VMs that we're talking about, with Glimmer on the frontend, the update and render VMs, are written in TypeScript.

    Tom: That's right.

    Jonan: You mentioned during your keynote that there were some features you added to TypeScript 2.2, or worked with the TypeScript team to add to TypeScript 2.2 and 2.3, to enable Glimmer? Or am I misunderstanding something?

    Tom: It's not enabling Glimmer per se, because Glimmer 2 from the beginning has been written in TypeScript. I think when they started TypeScript was on 1.8, so when you make a new Glimmer app, the default is to get TypeScript. That just works out of the box; because the library is written in TypeScript, you get awesome code completion, you get IntelliSense, you get documentation inline, all these things automatically.

    I can't say enough positive things about the TypeScript team. They are so professional, they are so responsive. We even asked Daniel Rosenwasser, who is the PM, last week "Hey do you wanna come to EmberConf next week?" "I will come, because I really want to meet the Ember community." They're really, really wonderful.

    So for Glimmer, the internals, because it's written in Typescript, there were really no problems. But the thing that they realized is, "Hey, there's actually this long tail of libraries that come from earlier versions of JavaScript like when ES3 and ES5 were kind of cutting edge, that built their own object model on top of JavaScript."

    So if you look at Ember, for example, you have the Ember object model, where you have .get and .set, and you have Ember.Object.extend and Ember.Object.create. Before we had ES6 classes, we had no choice but to build our own on top of the language. The problem is we need some way to let TypeScript know, "Hey, when we call Ember.Object.extend, that's not some random method, that's actually defining a type. That's defining a class."

    The TypeScript team has been really awesome saying, "Okay, how do we rationalize that and add the extension points where a system like Ember or…" I mean here's the thing. Every JavaScript library from that era has its own system like this, so they've built these really awesome primitives in TypeScript that let you express keyof types or mapped types.

    "Hey, when you see ember.object.extend, we're gonna pass it to POJO, Plain Old JavaScript Object as an argument. That's not just a bag of data. I want you to actually look at the keys inside of that object and treat those like types."

    So that's the thing we're really excited about because, of course, you're going to be writing Glimmer apps, you're going to be writing Glimmer components.

    You're going to get these really nice TypeScript features but then we don’t want you to have to go back to Ember code and miss those benefits.
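
    A rough sketch of the kind of typing being described, using TypeScript's keyof and mapped-type machinery (the extend signature below is hypothetical, not Ember's actual typings):

    // A typed get/set surface derived from the keys of a plain object.
    type Getter<T> = {
      get<K extends keyof T>(key: K): T[K];
      set<K extends keyof T>(key: K, value: T[K]): void;
    };

    // Hypothetical: tell TypeScript that extend() defines a class whose
    // instances carry the POJO's properties plus typed get/set.
    declare function extend<T extends object>(pojo: T): new () => T & Getter<T>;

    const Person = extend({ firstName: "Tom", lastName: "Dale" });
    const tom = new Person();

    tom.get("firstName");     // inferred as string
    tom.set("lastName", "D"); // OK
    // tom.get("age");        // compile error: "age" is not a known key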

    Jonan: That's a fantastic feature to have in a language and it's a difficult thing to bring yourself to add, I would imagine, if you're maintaining something like TypeScript. I think this is a smart way to approach the problem.

    Tom: Yes.

    Jonan: But you're looking at all of these people with their own legacy object models and I have an object model now, and I want people to use the object model that exists in this language. Right?

    Tom: Exactly, yes.

    Jonan: How do I let you also just roll your own object model? It's a pretty fundamental part of a programming language.

    Tom: It is, yeah, and that's what I mean about professionalism. I really, really appreciate the TypeScript team thinking so carefully about adoption, because I think it really requires maturity to do that. How do we bridge the gap, reach people where they are today? And then we can slowly bring them into the new, modern world as they do new things. I think that's hugely important and I think it's one thing that many people in the JavaScript community undervalue. It is such a breath of fresh air to see it from TypeScript.

    Jonan: That's great.

    Terence: Yeah. It seems to align a lot with all the stuff Ember tries to do and the way it does its features.

    Jonan: So at the very end of the keynote… You ran a little long on the keynote which is a rare thing to see.

    Tom: Yeah, yeah, very rare.

    Jonan: This year, you were overtime a little bit and you flipped through some content very quickly at the end. I was hoping maybe you could give us a peek at some of those things you didn’t get time to talk about in your keynote, that you wish you had time to mention.

    Tom: I think if we had had more time, one thing I would have really loved to go into more was the Glimmer API. I see the Glimmer API for components being the future of how you do components in Ember, and we have focused really hard on making these things feel like it's just regular JavaScript.

    Like I was saying, when Ember came of age, we didn’t have the ES6 classes. We couldn't even use ES5 because it wasn't adopted enough. So we built our own object model on top.

    Then rapidly, all of a sudden, the pace of JavaScript's development picked up, and now we have classes, and we have decorators, and we have getters, and we have all these amazing new things. Because it happened right after we stabilized our API, people who look at Ember sometimes think that we're doing our own weird thing, and they already know JavaScript. It's like, "I don’t wanna do it the Ember way. I wanna do it the JavaScript way."

    So what we tried really, really hard to do with Glimmer is say, "Okay, let's think about what someone who only knows JavaScript or modern JavaScript, what do they know and what are they expecting?" And let's just make the whole thing feel easy and natural for them.

    So for example, Glimmer component when you define it is just an ES6 class that extends the Glimmer component base class. The way that you import the Glimmer component is a standard import. Then there's a proposal in JavaScript called "decorators," which I believe is stage two. That lets you add certain annotations to properties, and methods, and classes and so on.

    Now in Glimmer we have introduced something called "tracked properties," but more importantly, in Glimmer you don’t actually need any kind of annotation, because your computed properties are just getters, which are built into the language. Of course, if you want to do change tracking, like "Hey, this computed property changed, how do I update the DOM?", you have a very simple decorator. So you don’t have to have this weird Ember thing, you just do what's in the language.
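
    For a flavor of what that looks like, here is a small component sketch based on the 2017-era Glimmer.js API (decorator semantics have varied between versions, so treat this as illustrative rather than definitive):

    import Component, { tracked } from "@glimmer/component";

    // An ordinary ES6 class; the computed property is just a getter.
    export default class ConferenceSpeaker extends Component {
      @tracked firstName = "Tom";
      @tracked lastName = "Dale";

      // Recomputed (and the DOM updated) when tracked state changes.
      get fullName(): string {
        return `${this.firstName} ${this.lastName}`;
      }
    }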

    Jonan: Which is hopefully going to increase adoption.

    Tom: I hope so, yeah.

    Jonan: This is a common problem, not just in the JavaScript community. You're coming up with new frameworks and you're moving very quickly. JavaScript, in particular, is moving very quickly. It seems like every week, or month, there's some new tool that I would have to learn, right?

    Tom: Yeah.

    Jonan: Something new and each one of them has their own distinct syntax, constantly changing. If you keep moving the goal post, eventually people tire of it. I consider the approach you took with Glimmer to be a very mature approach, and I really appreciate the effort you put in to make that.

    Tom: I think when people see Glimmer, it's very easy for their reaction to be "Oh, god, here comes another JavaScript library." What I hope is that people can look at our track record, and I hope we have some credibility with people, and see that, "Hey, we're not just talking a big game here. We actually have a community that has gone back at least five years. And we have apps that are five years old that have migrated."

    So I just hope people can feel safe when they look at Glimmer. It checks all the checklists that you need to check in 2017, but it also comes with the same community and the same core team that really values stability, that values migration, that values convention.

    Jonan: And speed.

    Tom: Yeah, and speed.

    Jonan: I think speed is the real reward from Glimmer. You build something in Glimmer and you, somehow, have accomplished this impossible tradeoff where you have a fast render speed and a fast update speed.

    Tom: I think it's interesting too because, you know, this always happens with benchmarks. There's some suite of benchmarks that comes out, people become over-focused on one particular metric.

    Jonan: Right.

    Tom: In this case, the community has really focused, in the last year, on initial render performance. Initial render performance is super, super important, but it's not always worth sacrificing updating performance. I think Glimmer has hit this really nice sweet spot where it’s not absolutely the fastest rendering library in terms of initial rendering, but it blows away all the other rendering engines at updates.

    Being the absolute fastest at initial render is only so important, so long as the user notices. It's not worth sacrificing everything if your constant time is imperceptible to the human, and I'm really excited with that sweet spot that we've hit.

    Jonan: We were talking the other day at lunch about the fact that there are some pages where I really don’t mind a long load time. If I'm going to a dashboard for a product that I've already purchased, I'm gonna sit there and wait. Like, yeah, maybe it takes 10 seconds, right, and I'm gonna be super annoyed and think, "Wow, why am I paying these people money?" Right? But for some definition of fast, all things start to be equal, when we get down towards those lower numbers.

    Tom: That’s right and I think people conflate those. You know, it's easy to get in a Twitter flame war because I'm talking about my dashboard that people are gonna sit on all day. You're talking about this ecommerce site. If you don’t have a response in under 200 milliseconds, people are gonna bounce and you're not gonna make your money. So those are different categories.

    That being said, I really do believe in my heart that there is a future where you can build your big dashboard app and it doesn’t take forever to load if we make the tools really good.

    Jonan: Thank you so much for taking the time to talk to us today. I really appreciate it. Do you have anything else you wanna share? Last minute thoughts?

    Tom: Oh, I just cannot wait to take a vacation in Barbados for a week.

    Jonan: Tom, thank you so much for being here.

    Tom: Thank you, Jonan, and thank you, Terence.

    Terence: Thank you.

  • The History of Ember.js: An Interview With Tom Dale at EmberConf - Part One (Heroku)
    09 May 2017 14:51

    At EmberConf Terence Lee and I had a chance to sit down with Tom Dale and chat about the history of Ember.js and where it’s headed now, including some details on the newly extracted Glimmer.js rendering engine. This post details a lot of the history of Ember, including some of the motivation that led the framework to what it is today. Watch the blog for the second portion of this interview with all of the details on Glimmer.js. The next post will also include the full audio of the interview, with many questions we opted to omit from the transcription to save valuable bytes.

    Jonan: So, we're at EmberConf speaking with Tom Dale, who gave a keynote today with some important announcements. We're going to dig into those in just a minute here, but I’d like you to introduce yourselves please.

    Tom: Sure. Hey, I'm Tom. I just started working at LinkedIn as a senior staff software engineer, and I work on a really awesome team that works on Ember infrastructure. As you may have seen, LinkedIn’s website is now one big Ember application. So my job is to make the army of engineers at LinkedIn productive, and make sure that we're able to build really awesome web software.

    Terence: I'm Terence, I do language stuff and Rails on the languages team [at Heroku].

    Jonan: There's a third-party Ember buildpack that you worked on, right?

    Terence: Yes. That has no JavaScript in it.

    Jonan: No JavaScript at all? But it ships Ember. I shipped my first Ember app on it.

    Tom: That's not true.

    Terence: It is true.

    Tom: It's all Ruby?

    Terence: Oh, yeah.

    Tom: Awesome. See, that's great. You know what, Ember is a big tent, as DHH would say. Not about Ember, he would say that about Rails, and then I would copy that because that's basically what we do. We just take what DHH says, repeat it in the context of JavaScript, and it sounds very thought-leadery.

    Jonan: Would you describe Ember as Omakase?

    Tom: I would describe it as being bespoke, artisanal, shade-grown Omakase.

    Jonan: That's even better. So, on the subject of Ember: it's been around for a while now. How old is Ember? Five years plus?

    Tom: It depends on what date you want to use. So if you're talking about Ember 1.0, I think it's been about five years.

    Terence: Do you include SproutCore in that?

    Tom: I mean I think we should. There is no Ember without SproutCore, and to me SproutCore was one of the first libraries or frameworks to adopt this idea of client-side architecture. So one thing that we talked about in the keynote yesterday was just how much the web has changed in five years, right? Five years ago, IE was the dominant browser, but actually, SproutCore had it way worse: we're talking about IE6 and IE7, while talking about doing ambitious things on the web.

    Jonan: And you did it in an era where browsers were not even close to where they are today.

    Tom: Not even close, not even close.

    Jonan: That's interesting. So then, from SproutCore, Ember comes out five years ago and we're off to the races. A lot changed in that first year, you went 1.0 and you’ve said that there were a lot of things that went wrong along the way. In your talk, you had a slide where you mentioned a few of those things. From the 10,000-foot view, what kind of lessons did you learn in those first 5 years?

    Tom: JavaScript apps felt broken and people didn’t know why, but they always said, "JavaScript apps feel broken, you know, for whatever reason, please don’t use them," right? And people wanted to shame you for using JavaScript. The reason for that, I think, is URLs. URLs are kind of the linchpin that holds the web together. So much of the value of the web over native is these URLs, and JavaScript apps just ignored them. SproutCore ignored them, and almost every JavaScript framework did. So, what Ember had to do was figure out how to build JavaScript apps that don’t feel broken on the web. That’s where all this work with the router started.

    Nowadays, routers are taken for granted. Every framework, every library has a router that you can drop into it. But there was actually some novel computer science work that went into it, in how we tie the architecture of the app to the URL. That took a long time and it was a very organic process. I don’t think we fully understood the magnitude of the research project that was going on. There are a lot of examples of that where we tackled problems for the first time, so of course, there's gonna be kind of an organic exploration of that space.

    Another example of this is that when we adopted the six-week release cycle, this train model with Canary, Beta, and release channels, the only other people doing it were Chrome and, I think, Firefox. And when we adopted it, it paid dividends right away, and I'm so happy that we adopted it. One constraint that we have, which Chrome and Firefox don’t have as much, is that we're shipping the framework over the wire every time a user visits your webpage, right?

    Jonan: Right.

    Tom: So it's very easy to have feature flags and to keep all the APIs around when you're distributing a binary. It's much harder when every time you do that, your file size goes up, and up, and up. And so what we've had to figure out is okay, "Well, we really liked this release train model. People really like the fact that it's backwards compatible. People really don’t like ecosystem breaking changes like Python 2 to Python 3 or Angular 1 to Angular 2. That doesn’t work so what do we do?"

    You know, you feel kind of stuck. So we've had to figure out a lot of things. Like one thing that we've been working on is something called Project Svelte, which is the ability to say, "You can opt out of deprecated features and we will strip those from the build".

    Jonan: But that's the only way that you can really move forward there. I mean if you've got to make this smaller, you can't just deprecate things arbitrarily.

    Tom: Right.

    Jonan: You can't make those decisions for your user. Your file size is ever growing, which when you're shipping over the wire, is not a great thing.

    This has already, historically, been an issue for Ember, the size of the framework.

    So what you are providing people now is a way to opt out of some those deprecated features. So say that, "All right, I've stopped using this API in my codebase, we can strip this out."

    That's known as Project Svelte?

    Tom: Yeah, that's Project Svelte. It's really important to remember that when Ember started, there were no package managers. NPM wasn’t at 1.0, or had just hit 1.0, and was not at all designed for frontend packages. It didn’t do any kind of deduplication or distribution.

    This is back in the day when the way that you got a library was you Googled for the website, you found it, they gave you a script tag to just drop in. I'm sure you all agree that's a horrible way to do dependency management.

    So we felt compelled to say, "Well, if we wanna make something… If we want people to actually use something, we have to bake it in." Because when you're gathering all your dependencies by hand, you're only gonna have, you know, four or five of them. You're not gonna go get a million dependencies. Of course, that has changed dramatically and we have new technology like Yarn, which is more like a Cargo/Bundler style of dependency resolution for JavaScript.

    What we found has not worked is trying to do big-design, upfront projects, because anything that we land in Ember gets that guarantee of stability and compatibility.

    People feel a very strong sense of responsibility, that if we land this feature, this has to be something that we are ready to support for the foreseeable future, and that just takes longer. It's the same reason standards bodies move relatively slowly.

    Jonan: Right. Now, this is something you brought up in your keynote. Rather than architecting or spending a huge amount of time and investment upfront architecting your system, you want to get it out in front of the customers as early as possible. But that conflicts with the idea that you're trying to present stable products, things that won't change, right?

    Terence: Stability without stagnation is the tagline right?

    Tom: Right. So that's the message but then we also know that you can't do a big design upfront, and you're not gonna get it perfect the first time. You ship an MVP and iterate.

    So how do you balance this tension? If you look at the projects we've embarked on in the last couple of years, there have been some projects that were more big design upfront. And those have largely stagnated and failed because of the fact that we just couldn’t get consensus on them.

    Then you have some other projects like Ember Engines and FastBoot. What we actually did was look at how web standards bodies work: TC39, W3C, WHATWG.

    There's something called the "Extensible Web Manifesto," which you may have seen, that says "Hey, standards bodies, open source libraries are able to iterate a lot faster than you are. So instead of focusing on building these big, beautiful, really-easy-to-use declarative APIs, give us the smallest primitive needed to experiment on top of that."

    That’s something that we really take to heart in Ember 2. If you think of Ember as being this small stable core, what we can do is expose just the smallest primitive that you need, and then we can let the experimentation happen in the community.

    So with FastBoot, for example, FastBoot is this entire suite of tools for deploying server-side rendered, client-side apps. You can easily push it to Heroku and, boom, it starts running, but that doesn’t need to live in Ember. We can do all the HTTP stuff, all of the concurrency stuff. All of that can live outside of Ember, all Ember needs to say is, "Give me a URL and I will give you the HTML for that back."

    So that's what we did. There's this API called Visit, the ‘visit’ method. You call it, you give the URL, you get HTML back, and it's so simple and you can easily have discussion about it.

    You can understand how it's gonna operate and that's the thing that we landed. Then that's given us a year to experiment in FastBoot and make a lot of really important changes.
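
    As a sketch of how small that surface is, server-side rendering can be wired up outside Ember roughly like this (TypeScript; the visit() shape is simplified for illustration and is not the exact Ember/FastBoot API):

    import express from "express";

    // Simplified stand-in for an Ember app booted in Node: one method,
    // URL in, HTML out.
    declare const emberApp: { visit(url: string): Promise<string> };

    const server = express();

    server.get("*", async (req, res) => {
      // HTTP handling, concurrency, and caching all live out here;
      // Ember only answers "give me the HTML for this URL".
      res.send(await emberApp.visit(req.url));
    });

    server.listen(3000);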

    Jonan: You were able to hide the complexity away behind this simple API.

    Tom: Right.

    Jonan: So some of the things that more recently you mentioned in your keynote as not having gone well, were Ember Pods, for example, and now we have Module Unification. So if I understand correctly, Ember Pods was a way to keep all of your component files related to a single component in one location?

    Tom: Right. The Rails style where you have one directory that's all controllers and one directory that's all views or templates, which is how Ember started. It's still the standard way, the default way you get when you create a new Ember app.

    People found it more productive to say, "I'm gonna have a feature directory", where you have your component and that component might have style. It might have JavaScript, it might have templates. I think it's just easier for people to reason about those when they're all grouped together, instead of bouncing around.

    Jonan: I love this idea. When I first came into Rails, I distinctly remember going from file to file and thinking, "Where even is this thing. How do I find this?"

    So you had said that Ember Pods, maybe, didn’t seem to take off? It wasn't a very popular solution to that problem, and now we have Module Unification. How is that different?

    Tom: I actually think that Pods was popular, very popular. So, there's something DHH says: "Beginners and pro users should climb the mountain together."

    I think it's a bad sign, in your framework, if there's a documented happy path that beginners use, and then at some point they fall off a cliff and discover, "Oh, actually there's this pro API. It's a little bit harder to use, but now that you're in the club, you get to use it." I think that leads to very bad experiences for both. You kind of wanna have both sets of people going up the same route.

    So Pods is almost this secret opt-in handshake. And it was just one of those things where it started off as an experiment but then slowly became adopted to the point where, I think, we didn’t move fast enough.

    Jonan: I see.

    Tom: We didn’t move fast enough and now, there's almost this bifurcation between apps that are not using Pods and apps that are using Pods.

    So with Module Unification what we did is we sat down and we said "OK, Pods was a really nice improvement but it didn’t have a ton of design applied to it. It was the kind of thing that evolved organically. So let's just sit down and try to design something."

    For us, it was really important with Module Unification to say, "Not only does it need to be good but we need to have a way of being able to automatically migrate 99% of the Ember apps today. We should have a command that will just migrate them to the new file system."

    So one thing that's really neat is that you can just have a component where all you have to do is drag it into another component's directory and now it's scoped. It's almost like a lexical scope in a programming language. We're using the file system to scope which components know about each other.

    Jonan: So, forgive my simplification here but I'm not great at Ember. If I have a login component and it's a box to just log in, and inside of it I wanted to have a Google auth button and a Twitter auth button, each of those could be independent components.

    Maybe I wanna reuse it somehow. I would drag them into my login directory and that makes them scoped, so we can't use them somewhere else.

    Tom: Right. That ends up being pretty nice, because often what you'll do is create a new component and give it a really nice and appropriate semantic name and, oops, it turns out your coworker used that name for another page a year ago. Now you can't use it, because that component does something completely different.
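
    In Jonan's example, a Module Unification layout might look something like this (a hypothetical tree, just to show how the file system does the scoping):

    src/ui/components/
      login/
        component.ts
        template.hbs
        google-auth/     <- resolvable only from within login
          component.ts
          template.hbs
        twitter-auth/
          component.ts
          template.hbs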

    Jonan: So I've got my Ember app and I've been using Pods all this time, and now, we have Module Unification and there's a new way to do this. I can just move over to module unification right?

    Tom: Yes.

    Jonan: We run this script that you've written and it would migrate me over?

    Tom: Yeah. So we have a migrator and because there's so many Ember apps using the classic system, so many Ember apps using the Pod system, it can handle both.

    Terence: Could Module Unification have happened without Ember Pods happening first?

    Tom: It's hard to say. I think it's something that people really wanted, and I think it's fantastic. This is something we touched on in the keynote; one thing that we've always said about Ember, and I think this is true about Rails also, is that there's always a period of experimentation when something new comes along. You really want that experimentation to happen in the community. Then eventually, one idea wins out. The things that we learned from Pods fed directly into the Module Unification design.

    Jonan: So maybe, we could chat a little bit about deprecating controllers in Ember?

    Tom: Sure, yeah.

    Jonan: You announced that you were going to deprecate all of the top-level controllers by 2.0, and then it was pushed to 2.1 and then 2.2. Is the plan still to deprecate controllers someday?

    Tom: I think what we are always dedicated to is trying to slim down the programming model and to keep reevaluating what the experience is like for new people. I don't want to say that we're going to deprecate controllers, because that sounds like a very scary thing, right? There's a lot of people with a lot of controllers in their apps. But I do think what we will want to do is look at the Ember programming model from the perspective of a new user and say, "Well, it seems like people already learned about components. And it seems like there's probably some overlap between what a controller does and what a component does."

    So maybe there's some way we can unify these concepts so people don’t have to learn about this controller thing with its own set of personality quirks.

    Jonan: Is this where routable components fit into the idea then?

    Tom: That's the idea behind routable components, and I don't have a concrete plan yet for exactly how it's going to work. In a lot of ways, the work we want to do on that was blocked by the Glimmer component API.

    I think what we'd like to do is add whatever low-level hooks are needed in Ember so that we can do some experimentation around things like routable components outside the framework. Let people get a feel for it, and then once we have a design that we're really happy with, we can land it back in mainline Ember.

    That’s the end of our discussion on the history and direction of the Ember project. Stay tuned for part two and learn more about the Glimmer.js project.

  • Hello RedBeat: A Celery Beat Scheduler (Heroku)
    02 May 2017 15:32

    The Heroku Connect team ran into problems with existing task scheduling libraries. Because of that, we wrote RedBeat, a Celery Beat scheduler that stores scheduled tasks and runtime metadata in Redis. We’ve also open sourced it so others can use it. Here is the story of why and how we created RedBeat.

    Background

    Heroku Connect makes heavy use of Celery to synchronize data between Salesforce and Heroku Postgres. Over time, our usage has grown, and we came to rely more and more heavily on the Beat scheduler to trigger frequent periodic tasks. For a while, everything was running smoothly, but as we grew, cracks started to appear. Beat, the default Celery scheduler, began to behave erratically, with intermittent pauses (yellow in the chart below) and occasional hangs (red in the chart below). Hangs required manual intervention, which led to an increased pager burden.

    [Chart: Beat scheduler behavior before RedBeat — intermittent pauses (yellow) and hangs (red)]
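    For context, the kind of periodic task Beat triggers is declared in the Celery config. Here's a minimal sketch using Celery 4's lowercase settings; the task name and interval are placeholders, not from the original post:

        from datetime import timedelta

        from celery import Celery

        app = Celery('connect', broker='redis://localhost:6379/0')

        # Beat reads this schedule and enqueues the task every 30 seconds;
        # a separate worker process picks it up and runs it.
        app.conf.beat_schedule = {
            'sync-salesforce': {
                'task': 'tasks.sync_salesforce',  # hypothetical task
                'schedule': timedelta(seconds=30),
            },
        }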

    Out of the box, Beat uses a file-based persistent scheduler, which can be problematic in a cloud environment where you can't guarantee Beat will restart with access to the same filesystem. There are ways to solve this, but they introduce more moving parts to manage a distributed filesystem. An immediate solution is to use your existing SQL database to store the schedule; django-celery, which we were already using, makes this easy (see the one-line setting below).
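    For reference, switching django-celery to its database-backed scheduler is a single setting; this is the Celery 3-era spelling from django-celery's docs, so verify it against your versions:

        # In Django's settings.py: store and read the schedule from the
        # database instead of Beat's local schedule file.
        CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'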

    After digging into the code, we discovered the hangs were due to blocked transactions in the database and the long pauses were caused by periodic saving and reloading of the schedule. We could mitigate this issue by increasing the time between saves, but this also increases the likelihood that we'd lose data. In the end, it was evident that django-celery was a poor fit for this pattern of frequent schedule updates.

    We were already using Redis as our Celery broker, so we decided to investigate moving the schedule into Redis as well. There is an existing celerybeatredis package, but it suffers from the same design issues as django-celery, requiring a pause and full reload to pick up changes.

    So we decided to create a new package, RedBeat, which takes advantage of the inherent strengths of Redis. We’ve been running it in production for over a year and have not seen any recurrences of the problems we were having with the django-celery based scheduler.

    The RedBeat Difference

    How is RedBeat different? The biggest change is that the active schedule is stored in Redis rather than within the process space of the Celery Beat daemon.

    Creating or modifying a task no longer requires Beat to pause and reload; we just update a key in Redis, and Beat picks up the change on the next tick. A nice side effect is that it's trivial to make updates to the schedule from other languages. As with django-celery, we no longer need to worry about sharing a file across multiple machines to preserve metadata about when tasks were last run. Startup and shutdown times also improved, since we don't suffer from load spikes caused by having to save and reload the entire schedule from the database. Instead, we have a steady, predictable load on Redis.
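    A minimal sketch of what that looks like in practice, based on RedBeat's documented usage (the broker and Redis URLs and the task name are placeholders):

        from celery import Celery
        from celery.schedules import schedule
        from redbeat import RedBeatSchedulerEntry

        app = Celery('connect', broker='redis://localhost:6379/0')

        # Tell Beat to use RedBeat and where the schedule lives in Redis.
        app.conf.beat_scheduler = 'redbeat.RedBeatScheduler'
        app.conf.redbeat_redis_url = 'redis://localhost:6379/1'

        # Creating or updating an entry is just a Redis write; a running
        # Beat daemon sees it on its next tick, with no pause or reload.
        entry = RedBeatSchedulerEntry(
            'sync-salesforce',        # schedule entry name (placeholder)
            'tasks.sync_salesforce',  # task to run (placeholder)
            schedule(run_every=30),   # every 30 seconds
            app=app,
        )
        entry.save()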

    Finally, we added a simple lock that prevents multiple Beat daemons from running concurrently. This can sometimes be a problem for Heroku customers when they scale up from a single worker or during development.
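    The lock is on by default and tunable through config; a sketch, with setting names as given in RedBeat's docs (the values shown are illustrative):

        # Key under which the lock is held in Redis; set to None to allow
        # multiple Beat daemons (i.e. disable the safety lock).
        app.conf.redbeat_lock_key = 'redbeat::lock'

        # Seconds before an unrefreshed lock expires, letting a replacement
        # Beat daemon take over after a crash.
        app.conf.redbeat_lock_timeout = 300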

    After converting to RedBeat, we’ve had no scheduler related incidents.

    [Chart: Beat scheduler behavior after RedBeat — steady operation, no pauses or hangs]

    Needless to say, so far we’ve been happy with RedBeat and hope others will find it useful too.

    Why not take it for a spin and let us know what you think?
