
Big Data Part II: AddThis Today

This is the second in a three-part series on the processing and infrastructure of AddThis. Make sure to check out part one and part three.

Fast forward to today. Our data infrastructure supports AddThis code deployments on more than 14 million domains, ingesting billions of events and terabytes of logs daily. What was once scores of machines has grown to around a thousand, totaling tens of terabytes of RAM and petabytes of disk space.

The majority of our internal compute capacity is dedicated to our custom data processing platform. This platform depends far more on large IO capacity and low latency than on CPU cycles. Virtualization, generally speaking, is manageable for many CPU- and memory-bound tasks, but is almost always destructive to the performance of network and disk subsystems. As a result, we employ no virtualization save for a few test machines. And while we run relatively commodity network gear, our decisions around topology and infrastructure have been made deliberately and carefully to maximize performance. In our experience, the compounding effects of latency and IO bottlenecks quickly become a non-linear problem.

  • even with the best of intentions, virtualization kills performance
  • compress everything, keep it compressed even in RAM until it’s needed (first sketch below)
  • all data must stream (compressed!), remote procedure calls are evil
  • data must be sharded for locality of processing, not physical locality (see previous; second sketch below)
  • model around simple tasks that can checkpoint and replicate (third sketch below)
  • anything that provides transactional safety between checkpoints works against you
  • design for (constant) failure whether in a public or private cloud
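
To make the “compress everything” and “all data must stream” bullets concrete, here is a minimal Java sketch (the class is hypothetical, not AddThis code) of a record batch that stays gzip-compressed in RAM and is only inflated, as a stream, when a consumer actually reads it:

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

/** Hypothetical batch holder: records stay gzip-compressed while at rest in RAM. */
public final class CompressedBatch {
    private final byte[] compressed; // the only copy we keep

    public CompressedBatch(Iterable<String> records) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (Writer w = new OutputStreamWriter(new GZIPOutputStream(buf), StandardCharsets.UTF_8)) {
            for (String record : records) {
                w.write(record);
                w.write('\n');
            }
        }
        this.compressed = buf.toByteArray();
    }

    /** Inflate lazily, as a stream; no fully decompressed copy is ever materialized. */
    public BufferedReader stream() throws IOException {
        return new BufferedReader(new InputStreamReader(
                new GZIPInputStream(new ByteArrayInputStream(compressed)),
                StandardCharsets.UTF_8));
    }
}
```

A consumer iterates `stream()` line by line, so a fully inflated copy of the batch never has to exist in memory.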
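“Locality of processing” just means that every record with a given key is routed to the same worker, wherever that worker physically sits, so all computation on a key happens in one place without network hops. A toy version of the routing (the names are illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

/** Hypothetical router: same key -> same shard, so one worker owns all
 *  processing for that key and never coordinates over the network. */
public final class ShardRouter {
    private final int shards;

    public ShardRouter(int shards) { this.shards = shards; }

    public int shardFor(String key) {
        CRC32 crc = new CRC32();
        crc.update(key.getBytes(StandardCharsets.UTF_8));
        // A stable hash of the key, not of the machine's location.
        return (int) (crc.getValue() % shards);
    }
}
```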
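And the checkpoint bullet: a task that can rebuild its state from the last checkpoint needs no transactional machinery in between, because replaying from the checkpoint is always safe. A hypothetical sketch using an atomic file rename as the commit point:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

/** Hypothetical worker: replay from the last checkpoint is always safe,
 *  so nothing between checkpoints needs transactional protection. */
public final class CheckpointedTask {
    private final Path checkpoint = Paths.get("task.ckpt");
    private long offset; // position in the input stream
    private long count;  // stand-in for real derived state

    public void restore() throws IOException {
        if (Files.exists(checkpoint)) {
            String[] parts = new String(Files.readAllBytes(checkpoint),
                    StandardCharsets.UTF_8).trim().split(",");
            offset = Long.parseLong(parts[0]);
            count = Long.parseLong(parts[1]);
        }
        // The caller re-reads input from `offset`; duplicate work is fine,
        // duplicate effects are avoided because state is rebuilt, not appended.
    }

    public void process(String record) {
        count++;  // the real work would happen here
        offset++;
        if (offset % 10_000 == 0) writeCheckpoint();
    }

    private void writeCheckpoint() {
        try {
            Path tmp = Paths.get("task.ckpt.tmp");
            Files.write(tmp, (offset + "," + count).getBytes(StandardCharsets.UTF_8));
            // Atomic rename is the commit point: a crash leaves either the
            // old checkpoint or the new one, never a torn file.
            Files.move(tmp, checkpoint, StandardCopyOption.ATOMIC_MOVE);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```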

AddThis employs top-tier CDNs for most static web-asset serving, but we have still chosen to internalize our HTTP-based APIs. Even though cloud offerings excel at many web-service scenarios, they mainly rely on traditional LAMP stacks with limited back-end commitments beyond scripting and modest databases. In these environments, virtualization is usually fine because most instances vastly under-utilize system resources. But when contention rears its head in the cloud, there is no recourse other than to take the penalty or to provision new, effectively random hardware.

Our “edge” and eventing APIs are HTTP services that require maniacal tuning for performance and reliability. With billions of requests a day spread across millions of domains, our code is trusted by site owners to respond quickly. Latency *is* supreme. Flowing from our edge and eventing APIs to the back end are terabytes of real-time data. This live stream is forked into log storage and real-time systems. Proximity to our compute clusters makes processing both faster and more economical by avoiding repeated transfers of data across high-cost, high-latency links to remote data centers.
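
Conceptually, that fork is just a tee: a single ingest point hands each event to every downstream sink. A hypothetical sketch (the sinks here are stand-ins for our log appender and real-time path):

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;

/** Hypothetical fan-out: every event reaches all sinks from one ingest point. */
public final class EventTee {
    private final List<Consumer<String>> sinks;

    public EventTee(List<Consumer<String>> sinks) { this.sinks = sinks; }

    public void publish(String event) {
        for (Consumer<String> sink : sinks) sink.accept(event);
    }

    public static void main(String[] args) {
        BlockingQueue<String> realtime = new LinkedBlockingQueue<>();
        EventTee tee = new EventTee(List.of(
                e -> System.out.println("append-to-log: " + e), // stand-in log sink
                realtime::add));                                // real-time path
        tee.publish("{\"event\":\"share\",\"domain\":\"example.com\"}");
    }
}
```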

  • commodity network gear with the right topology and software can work wonders
  • optimization of networks, proxies and kernels remains a high art (see the sketch after this list)
  • Chrome does unspeakable things (for another day)
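
Most of that tuning lives in kernel sysctls and proxy configuration, but some of it surfaces in application code as well. As a small illustration (the port and backlog values are made up, not our production settings), a latency-sensitive listener typically disables Nagle’s algorithm and sizes its accept backlog explicitly:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

/** Illustrative listener tuning; values are examples only. */
public final class TunedListener {
    public static void main(String[] args) throws IOException {
        ServerSocket server = new ServerSocket();
        server.setReuseAddress(true);                   // restart fast despite TIME_WAIT
        server.bind(new InetSocketAddress(8080), 4096); // explicit, generous accept backlog
        while (true) {
            Socket s = server.accept();
            s.setTcpNoDelay(true); // disable Nagle: small responses leave immediately
            // hand `s` to a worker pool here; closed inline to keep the sketch short
            s.close();
        }
    }
}
```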

AddThis’ compute clusters are constantly processing and re-processing hundreds of terabytes of data, applying new algorithms, generating new metrics, and finding deeper insights. Our current model gives us complete visibility into our stack. With a top-to-bottom view of hardware and software, we can squeeze out more than 10 times the performance per dollar that we get from cloud-based offerings. Equally important, we can do this reliably and predictably.

Continue to Part III: A Simple Decision – Performance, Predictability, Price

  • Mxx

    If you skipped the cloud, how can you say that your physical infra can “squeeze out more than 10 times the performance per dollar that we get from cloud-based offerings”?
    How do you know that? Based on what metrics, numbers, benchmarks and providers?
    You are talking in extremely generic terms and clumping 100s if not 1000s of very different cloud providers together.