What's new in Django community blogs?


Sep 30 2016 [Archived Version] Published at Latest Django packages added

Dynamically select only a subset of fields per DRF resource.

Store Measurements Sanely with django-measurement

Sep 30 2016 [Archived Version] Published at GoDjango Screencasts and Tutorials

Storing measurement data for multiple types of units is quite a frustrating task. Fortunately, django-measurement provides a good wrapper for python-measurement, which makes this almost a non-issue. In this video, get a quick overview of how to use django-measurement to store that pesky data.
Watch Now...


Sep 30 2016 [Archived Version] Published at Latest Django packages added

A django app to apply filters on drf querysets using query params with validations using voluptuous.

Control a Star Wars BB-8 Droid With Arm Gestures and IBM Bluemix Internet of Things

Sep 29 2016 [Archived Version] Published at tuts+

What You'll Be Creating

Welcome! In our prior tutorial, Control a Droid With Your Mind and IBM Bluemix Internet of Things, we covered Joshua Carr's use of the Emotiv Insight headset to control a Star Wars BB-8 droid with his thoughts. If you don't believe me, check it out or watch the video below.

It's made possible with some amazing consumer hardware and IBM Bluemix integration with the Internet of Things.

In today's tutorial, I'll guide you through my interview with Consulting IT Specialist Rob Peeren. He created the featured video at the top, showing how he used an armband and arm gestures to control BB-8 with enough accuracy to play soccer (or concussion-free football as some call it).

We're also likely to continue this series on IBM Bluemix and the Internet of Things (IoT) with specific step-by-step tutorials about how to try out more of your own projects. Please let us know which types of IoT topics you'd like to see more written about in the future. 

If you'd like a broader overview of IBM Bluemix, I encourage you to watch David Barnes's introduction below:

As always, share your ideas and feedback in the comments below or reach me directly on Twitter @reifman. You can also reach out to Rob Peeren @robobob or via @IBMCloud.

Armband Controller Components

Here are the elements of Peeren's armband demonstration:

IBM Bluemix IoT Arm Gestures - The Components Used by Rob Peeren for Tutorial Today

You can buy the Myo armband in black or white for $199 USD:

IBM Bluemix IoT Arm Gestures - Myo Gesture Control Armband

Here are a couple of introductory videos of the Myo Armband which are fun to watch, especially if you haven't seen it before. 

Here's the general product introduction:

And this one is targeted more at developers:

They offer a variety of solutions for usage, as well as an excellent Developer site.

And of course, here's BB-8 again and how it came to be:

IBM Bluemix IoT Arm Gestures - Retail Box of Star Wars BB-8 Droid by Sphero

Building the Application

IBM Bluemix IoT Arm Gestures - View of BB-8 Darth Vader Golfball Raspberry Pi and Myo Armband

Now, let's dive into how Peeren built the demonstration using IBM Bluemix Internet of Things. In today's episode, I'll be giving a general overview from my interview with Peeren. It's possible we'll do a step-by-step together in the near future; let us know in the comments below if you'd be interested!

Here's a screenshot showing how Bluemix works with devices and the IoT:

IBM Bluemix IoT Arm Gestures - How it all fits together intro to Bluemix IoT

Here is a high-level architectural image of what's happening between Bluemix and each of the Raspberry Pis in Peeren's video (learn more about MQTT here):

IBM Bluemix IoT Arm Gestures - Device and Bluemix IoT Flowmap with MQTT

Setting Up a Bluemix Application

Since I'm not stepping you through the application setup, you may be interested in a tour of the Bluemix application UX given by IBM Design Lead, Tarun Gangwani:

Basically, you can create an application from any of the Bluemix boilerplates, including the Internet of Things Platform Starter.

IBM Bluemix IoT Arm Gestures - Boilerplates menu

Here is Peeren's Internet of Things Dashboard, which includes the SDK for NodeJS and the Internet of Things Platform which he uses to receive data from the Myo armband and send it to the BB-8:

IBM Bluemix IoT Arm Gestures - Bluemix Dashboard with IoT123 Demo App

Calibrating the Armband to Your Movements

Peeren recommends that you practice with the Myo armband after calibrating it. Here's a video from Creating a custom calibration of your Myo Armband which shows how this works (see also What can the Myo armband actually do):

Basically, you calibrate it with a few simple gestures and then practice your movements so that it can pick up your intentions. Peeren used the following gestures for the video:

  • Waving in to turn left
  • Waving out to turn right
  • Fist to make it stop
  • Spreading fingers to make it go
  • Raising your arm to spin 180 (Tony Hawk would be proud and then say, "Do it in mid-air Droid!")

Just as it took Carr hours to train the Emotiv Insight, Peeren says it takes practice to work with the Myo. Control systems aren't completely automatic yet: you can't just put on the helmet and accurately fly the helicopter (sorry to bum you out, action movie writers).

Sending Armband Telemetry to Bluemix

As I mentioned above, the blue Raspberry Pi receives input from the armband and sends it to the Bluemix cloud. It does this by running Python code that uses MQTT to communicate with Bluemix.

Essentially, the Myo Armband sends telemetry via Bluetooth to the Bluetooth adapter on the Raspberry Pi. Then, the Python code takes the telemetry and sends it to Bluemix in the cloud. All the data comes in as a JSON payload.

Here's a screenshot of Peeren's Python code:

IBM Bluemix IoT Arm Gestures - Python Code from the Demo
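Peeren's actual code appears only in the screenshot above, so here's a minimal, hypothetical sketch of the telemetry step in Python: wrapping a recognized gesture in a JSON payload for an MQTT publish. The topic string follows Watson IoT's documented iot-2/evt/<event-id>/fmt/<format> pattern, but the "gesture" event id and the field names are assumptions, not taken from the demo.

```python
import json

# Hypothetical topic following IBM Watson IoT's event format,
# iot-2/evt/<event-id>/fmt/<format>; the "gesture" event id is an assumption.
TOPIC = "iot-2/evt/gesture/fmt/json"

def make_payload(gesture, timestamp):
    """Wrap a recognized Myo gesture in the JSON envelope sent to Bluemix."""
    # Watson IoT device events conventionally nest data under a "d" key.
    return json.dumps({"d": {"gesture": gesture, "ts": timestamp}})

# In the demo, a call like client.publish(TOPIC, payload) would follow,
# using an MQTT client such as paho-mqtt.
payload = make_payload("wave_in", 1475193600)
```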

If you want to delve deeper in this area, I recommend checking out the Thalmic Labs Developer portal and its unofficial library page. You might also check out this related video of a Myo armband directing a Raspberry Pi wheeled robot (it has a great soundtrack):

Processing the Armband Data Within Bluemix

Within Bluemix, the data is transformable using the Node-RED visual editor. We need to convert the incoming Myo gestures into commands the BB-8 understands in its driver/language.

For example, armband up is translated to BB-8: start and stop. Here's a screenshot from Peeren's Node-RED translation flow:

IBM Bluemix IoT Arm Gestures - NodeRED Visual Wiring Editor
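The translation step in the Node-RED flow above boils down to a lookup table. Here's a hedged Python sketch of the idea; the gesture and command names are illustrative assumptions, not Peeren's actual identifiers.

```python
# Hypothetical mapping from Myo gestures to BB-8 driver commands;
# the names are illustrative, not taken from the actual flow.
GESTURE_TO_COMMAND = {
    "wave_in": "turn_left",
    "wave_out": "turn_right",
    "fist": "stop",
    "fingers_spread": "go",
    "arm_up": "spin_180",
}

def translate(gesture):
    # Unknown gestures map to a no-op so stray readings don't move the droid.
    return GESTURE_TO_COMMAND.get(gesture, "noop")
```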

Delivering Commands to BB-8

To get commands from Bluemix to the robot, they are sent from Bluemix via the Internet to NodeJS and the Cylon.js SDK on the silver Raspberry Pi. The silver Pi sends commands via Bluetooth to the BB-8 droid.

Cylon.js is a JavaScript framework for robotics, physical computing, and the Internet of Things. It makes it incredibly easy to command robots and devices. There's also a specific Cylon.js SDK for Sphero's BB-8. See also the Cylon.js driver on GitHub.

Here's some of Peeren's Cylon code connecting from Bluemix via MQTT to BB8 via cylon-sphero-ble:

IBM Bluemix IoT Arm Gestures - Cylonjs JavaScript code from the Demo

Once the commands are received by the BB-8, its internal systems execute each one, creating the droid's motion and lighting effects.

Key Takeaways

I asked Peeren what was hardest about this effort, and he replied, “It was actually fairly straightforward.” He said he's just connecting a bunch of APIs. 

“I didn’t build anything here. I didn’t write any device drivers or lower level code. I’m using the APIs and connecting in a standard way to Bluemix via the MQTT protocol. I didn’t have to invent anything to make it work."

Peeren wants to inspire us to think about using Bluemix in bigger ways. Certainly, I'm inspired by everything Bluemix can do with the Emotiv Insight headset, the Myo Armband and Raspberry Pi hardware. It's incredible how far the industry has come.

As a teenager, I experimented with voice signal capture and dreamed of recognition. In college, I experimented with pen tablet and how handwriting recognition might work (demonstrating this late one afternoon to Nathan Myhrvold helped me land my first job at Microsoft.) But now most of this is possible with Bluemix and the Internet of Things.

Peeren says, "The basic plumbing is very simple." And Bluemix offers over 120 enterprise-ready services; "It’s not just about armbands and robots." Or microphones and voice recognition—it's much more.

He suggests experimenting with the Bluemix analytics engine to raise the intelligence of the interpretation of gestures or its visual recognition capabilities.

When you build your own application in Bluemix, you have everything in one place. You don't have to go to different platforms. One place for business rules, a reporting rules engine, Hadoop, etc. The possibilities are nearly endless.

What's Next?

I hope you've enjoyed both our IBM Bluemix Internet of Things video demonstrations and tutorials. Perhaps you'll feel inspired to try building your own demonstration.

Peeren mentioned to me that the best part about Bluemix IoT is that nothing is ever very complicated. He's able to accomplish his goals by combining the building blocks which Bluemix and third-party providers offer.

IBM also offers a range of training and certification for Bluemix through its developerWorks sites. Here are some related resources:

If you'd like to see more on Bluemix and IoT, please let us know—post in the comments or reach us on Twitter @reifman or Rob Peeren @robobob or via @IBMCloud. If you build a cool IoT device application, let us know and perhaps we'll write a feature about yours!

You can look for future tutorials of all kinds from me on my Envato Tuts+ instructor page. I hope you'll also check out my two series How to Program With Yii2 and Building Your Startup With PHP about building Meeting Planner.

IBM Bluemix IoT Arm Gestures - IBM Logo and Bluemix Link

If you wish to learn more about IBM Bluemix and Internet of Things, visit http://ibm.com/bluemix.

Related Links

Django Girls Pune 2016

Sep 29 2016 [Archived Version] Published at Django Girls Blog

This blog post was written by Rohit Jaiswal. Thank you Rohit!❤️

Yay !!!

We successfully completed the 2nd Django Girls event in Pune City. This year we had almost double the applications of last year, making it tough for us to select the participants. So yes, we were able to draw a lot of attention to the Django Girls initiative. I feel blessed and proud to be associated with it as an organizer.


Of the 49 registered (including mentors), 37 turned up, owing to heavy rains on the event day. Those who came were so full of zeal that we never felt the numbers were low. The mentors did very well, since they had already walked the mentees through the pre-event setup (under my guidance, of course ;) ;)). We learned together, practiced together, experimented together, and all together it was a fruitful and joyful event.

Special thanks to Red Hat Pune for providing the venue, resources, food, snacks and overall awesome arrangements.

Our sincere thanks to GitHub for providing financial assistance so we could cover the event expenses.

Last but not least, our sincere thanks to Lucie Daeye for helping at each step.

Thank you Django Girls for giving us the opportunity to host the event in Pune City. We hope to host many more such events in the future.


Rohit Jaiswal


Sep 28 2016 [Archived Version] Published at Latest Django packages added

Django Instant

Sep 27 2016 [Archived Version] Published at Latest Django packages added

Websockets for Django with Centrifugo

Acing Your Architecture Interview

Sep 27 2016 [Archived Version] Published at Irrational Exuberance

My most memorable interview started with an entirely unexpected question: “How well do you remember calculus?” I smiled and said that it had been some years, and that I was rather out of practice. Nonetheless we spent the next hour trying to do calculus, which I bombed spectacularly.

Many people have similar misgivings about “architecture interviews”, which are one of my favorite interviews for experienced candidates from within the internet industry, and I decided to write up my algorithm for approaching these questions.

Before jumping in, a few examples of typical architecture interview questions:

  • Design the architecture for Facebook, Twitter, Uber or Foursquare.
  • Diagram a basic web application of your choice.
  • Design a reliable website similar to a newspaper’s website.
  • Design a scalable API to power a mobile game.

From that starting point, the interviewer will either give you more constraints to solve for (“your database starts to get overloaded”), or ask you to think of ways that your system will run into scaling problems and how to address them (“well, first off, I think we’d run out of workers to process incoming requests”).

The basic algorithm to this class of interviews is:

  1. Diagram your current design.
  2. Apply a new constraint to your design.
  3. Determine the bottleneck created by that constraint.
  4. Update your design to address the bottleneck.

That’s probably not quite enough to help you ace your next architecture interview, so the rest of this post will go through a brief primer on drawing architectural diagrams, and then dive into the bottlenecks created by specific constraints.


The most important thing to know about technical diagramming is that it’s much more of an art than a science. As long as you consistently do something reasonable, you’ll be fine.

During an interview, you’ll almost always be writing on a whiteboard, but if you do ever find yourself diagramming on a computer, I’m a huge fan of Omnigraffle. Even though it is not cheap, I think it’s a worthwhile investment (I use it for wireframing in addition to diagramming).

Let’s do a few examples.

On the left is a server running both a webserver and a database; on the right are two servers, one running a webserver and the other a database.

Diagramming a server with two processes and two servers with one process.

There are very few conventions, but generally servers are boxes, processes are boxes, and databases or other storage mechanisms are cylinders. It’s totally fine to just use boxes for everything, though. Servers are generally named with the server kind combined with a unique number, for example frontend01 and frontend02 for the first and second servers of the kind frontend. Numbering is helpful as you add more servers to your example. Representing 1 as 01 is fairly arbitrary, but it’s a bit of a “smell” implying that you’ve named servers before for groups with more than nine servers (you can absolutely compose a number of very reasonable arguments that naming servers this way is a bad idea; it’s more of a common practice than a best practice).

Next, let’s show two datacenters, taking traffic from the mythical cloud that represents the internet.

Diagram of internet sending traffic to two datacenters.

The internet is, by odd convention, always represented as a cloud. If you had a mobile device connecting to your app, generally you would show the mobile device, draw a line to the internet, and then continue as above (the same is equally true for a website).

Datacenters or regions are physically colocated groups of servers, and are generally depicted as a large box around a number of servers (exactly like processes within a server are depicted as boxes within the server; generally it’s not unreasonable to think that a datacenter is to a server as a server is to a process).

As seen between the fe and app tiers of servers, I often leave off lines when it makes the diagram too messy. Personally, I think diagrams should prefer to be effective instead of accurate, if the two concepts come into tension.

To summarize: lots of boxes, a few lines, and as simple as possible. Now, onward to problem solving!

Diagram, Constrain, Solve, Repeat

Now we’re getting into the thick of things. For each subsection, we’ll show a starting diagram, add a constraint to it, diagnose what the constraint implies, and then create an updated diagram solving for the constraint. There are many, many possible constraints, so we won’t be able to cover all of them, but hopefully we’ll be able to look at enough of them to give an idea.

Web Server Is Overloaded

You’re running your blog on a single server, which is a simple Python web application running on MySQL.

All of a sudden, your server becomes “overloaded”. What do you do?! Well, the first thing is to figure out what that even means. Here are a handful of things it might mean (in approximate order of likelihood):

  1. not enough workers to handle concurrent load,
  2. not enough memory to run its workload,
  3. not enough CPU to run its workload quickly,
  4. disks running out of IOPS,
  5. not enough file descriptors,
  6. not enough network bandwidth.

You should ask the interviewer which of these issues is causing the overload, but if they ask you to predict which is happening, it’s probably one of the above. Let’s think about both vertical and horizontal scaling strategies for each of those (vertical scaling is increasing your throughput on a given set of hardware, perhaps by adding more memory or faster CPUs, and horizontal scaling is about adding more servers).

Many, probably most, servers use threads for concurrency. This means if you have ten threads you can process ten concurrent requests. It follows that if your average request takes 250 ms, your maximum throughput on ten threads is 40 requests per second. By far the most common first scaling problem for a server is that you have too many incoming requests for your current workload.
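The arithmetic above is worth internalizing, and is easy to check:

```python
def max_throughput(threads, avg_request_seconds):
    """Peak requests/second for a purely thread-per-request server."""
    return threads / avg_request_seconds

# Ten threads, 250 ms per request:
rps = max_throughput(10, 0.250)  # 40.0 requests per second
```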

The simplest solution is to add more worker threads, with server02 having many more threads than server01.

The limitation is that every additional thread consumes more memory and more CPU. At some point, varying greatly depending on your specific process, you simply won’t be able to add more threads due to memory pressure. It’s also possible that your server will have some kind of shared resource across threads, such that it gets slower for each additional thread, even if it isn’t becoming memory or CPU constrained.

If you become memory constrained before becoming CPU constrained, then another great option is to use a nonblocking webserver which relies on asynchronous IO. The nonblocking implementation decouples your server from having one thread per concurrent request, meaning you could potentially run many, many concurrent requests on a single server (perhaps 10,000 concurrent requests!).
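A toy illustration of why non-blocking IO changes the math, sketched with Python’s asyncio: a single thread keeps a hundred simulated requests in flight at once, because each request spends its time waiting rather than holding a thread.

```python
import asyncio

async def handle(request_id):
    # Simulate a request that spends its time waiting on IO (db, network).
    await asyncio.sleep(0.01)
    return request_id

async def main(n):
    # One thread, n requests in flight at once: total wall time stays near
    # a single request's latency rather than growing linearly with n.
    return await asyncio.gather(*(handle(i) for i in range(n)))

results = asyncio.run(main(100))
```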

Once you’ve exhausted your above options to process more requests on your given server, then the next steps are likely to break apart your single server into several specialized servers (likely a server role of servers and a database or db role of servers), and put the web servers behind a load balancer.

The load balancer will balance the incoming requests across your webservers, each of which will have its own threads that can process concurrent requests. (Load balancers rely on non-blocking IO, and are heavily optimized, so their concurrency limits are usually not a limiting factor.) You’re now set up so you can more-or-less indefinitely add more and more server nodes to increase your concurrency. (Typically your database will be the next thing to break at this point, which is covered below.)

Scaling horizontally via load balancer is the horizontal solution to all host-level problems, if you are running a stateless service. (A service is stateless if any instance of your service can handle any request; if you need the same server to continue handling all requests for a given page or user, it is not stateless).
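A load balancer’s core job can be sketched in a few lines; real balancers (HAProxy, nginx, ELB) add health checks and smarter policies, but round-robin over stateless servers is the essence:

```python
import itertools

class RoundRobinBalancer:
    """Toy load balancer: cycles incoming requests across stateless servers."""
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def route(self, request):
        # Stateless services let any server handle any request, so the
        # balancer can ignore the request contents entirely.
        return next(self._cycle)

lb = RoundRobinBalancer(["web01", "web02", "web03"])
targets = [lb.route(f"req{i}") for i in range(4)]
```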

If you remember one thing, remember to use a load balancer to scale horizontally, and feel free to skip forward to the databases section; but we’ll also dig a bit into vertically scaling past the other common scaling challenges for a single server.

The next most common scenario is running out of memory (perhaps due to too many threads). In terms of vertical scaling, you could add more RAM to your server or invest some time in reducing its memory usage.

If your data is too large to fit into a given server and all the data needs to fit into memory at once, then you can shard the data such that each server has a subset of data, and update your load balancer to direct traffic for a given shard to the correct servers.
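A minimal sketch of that shard-routing mechanism: hash the key so the same user always lands on the same server. The shard names are placeholders.

```python
import hashlib

SHARDS = ["shard00", "shard01", "shard02", "shard03"]

def pick_shard(key):
    """Stable hash routing: the same key always lands on the same shard."""
    # md5 is used for stability across processes, not for security;
    # Python's built-in hash() varies between runs.
    digest = hashlib.md5(key.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]
```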

We’ll look at this category of solution a bit more in the section on databases below, but the other option is to load less of your data into memory and instead to store most of it on disk. This means that reading or writing the data will be slower, but that you’ll have significantly more storage capacity (1TB SSDs are common, but it’s still uncommon for a web server to have over 128GB of RAM).

For read heavy workloads, a compromise between having everything on disk and everything in memory is to load the “hot set” (the portion which is frequently accessed) of data into memory and keep everything else on disk (a newspaper might keep today’s articles in memory but keep their historical archive on disk for the occasional reader interested in older news).
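The hot-set idea is essentially an LRU cache in front of the disk. A toy sketch, with a dict standing in for the disk:

```python
from collections import OrderedDict

class HotSetCache:
    """Keep the most recently read items in memory; evict the rest."""
    def __init__(self, capacity, disk):
        self.capacity = capacity
        self.disk = disk          # fallback store; a dict stands in for disk
        self.memory = OrderedDict()

    def get(self, key):
        if key in self.memory:
            self.memory.move_to_end(key)      # mark as recently used
            return self.memory[key]
        value = self.disk[key]                # slow path: read from "disk"
        self.memory[key] = value
        if len(self.memory) > self.capacity:
            self.memory.popitem(last=False)   # evict least recently used
        return value
```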

Running out of CPU is very similar to running out of RAM, with a couple of interesting aspects.

A server with major CPU contention keeps working in a degraded way whereas a server with insufficient memory tends to fail very explicitly (software tends to crash when it asks for more memory and doesn’t get it, and Linux has the not-entirely-friendly OOM killer which will automatically kill processes using too much memory).

In many cases where you’re doing legitimate work that is using too much CPU, you’ll be able to rely on a library in a more CPU efficient language (like Python or Ruby using a C extension, simplejson is a great example of this that you might already be using). It’s not uncommon for production deployments to spend 20-40% of their CPU time serializing and deserializing JSON, and simply switching the used JSON library can give a huge boost!
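Swapping JSON libraries is usually a one-line change. A common pattern is to try the faster library and fall back to the stdlib when it isn’t installed:

```python
# simplejson is C-accelerated when its extension is available; the stdlib
# json module is a drop-in fallback with the same dumps/loads interface.
try:
    import simplejson as json
except ImportError:
    import json

encoded = json.dumps({"page_views": 12345})
```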

In cases where you’re already using performant libraries, you can rewrite portions of your code in more performant languages like Go, C++ or Java, but the decision to rewrite working software is fraught with disaster.

In a workload where you’re writing a bunch of data (or even just an obscene amount of logs) or reading a bunch of data from disk (perhaps because you have too much to fit into memory), then you’ll often run out of IOPS. IOPS are the number of reads or writes your disk can do per second.

You can rewrite your software to read or write to disk less frequently through caching in memory or by buffering writes in memory and then periodically flushing them to disk.
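Buffering writes might look like the following sketch: many logical writes are accumulated in memory and flushed as one physical write (trading a little durability, since buffered records are lost on a crash).

```python
class BufferedWriter:
    """Accumulate writes in memory; flush to disk in batches to save IOPS."""
    def __init__(self, flush_fn, batch_size=100):
        self.flush_fn = flush_fn      # stand-in for the actual disk write
        self.batch_size = batch_size
        self.buffer = []

    def write(self, record):
        self.buffer.append(record)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.buffer:
            self.flush_fn(self.buffer)   # one physical write, many records
            self.buffer = []

flushed = []
w = BufferedWriter(flushed.append, batch_size=3)
for i in range(7):
    w.write(i)
# two automatic flushes of three records; one record still buffered
```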

Alternatively, you can buy more or different disks. Disks are interesting because the kind of disk you use matters in a very significant way: spinning disks (SATA drives are typically larger and slower, SAS drives faster and smaller) max out below 200 IOPS, but low-end SSDs can do 5,000 IOPS and the best can do over 100,000 IOPS. (There are even higher-end devices like FusionIO which can perform far more, at great cost.)

It’s pretty common to end up with your web servers running cheap SATA drives and to have your database machines running more expensive SSDs.

For workloads with high concurrency (for example, the non-blocking service you used earlier to get past the thread limit on a single server), an exciting problem you will eventually run into is running out of file descriptors.

Almost everyone has a great file-descriptor-related disaster story, but the good news is that it’s just a value you need to change in your operating system’s configuration, and then you can go back to your day a bit wiser. (There is a storied history of operating systems setting bewilderingly low default values for file descriptors, but on average they are setting increasingly reasonable defaults these days, so you may never run into this.)
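On a POSIX system you can inspect and adjust the per-process limit from Python’s stdlib; raising the hard cap itself requires root, and the ceiling of 4096 below is just an illustrative choice.

```python
import resource

# Read the current soft/hard file-descriptor limits for this process.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

# Adjust the soft limit (persistent changes belong in OS configuration,
# e.g. /etc/security/limits.conf on Linux). 4096 is an arbitrary example.
new_soft = 4096 if hard == resource.RLIM_INFINITY else min(4096, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))
```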

Finally, if your application is very bandwidth intensive (perhaps your hypothetical application does streaming video), then at some point you’ll start running out of bandwidth on your network interface controller, or NIC. Most servers have 1 Gbps NICs these days, but it’s pretty cost effective to upgrade to 10 Gbps NICs if you need them, which most workloads don’t.

Alright, that concludes our options for scaling our single web server! In addition to the numerous strategies for vertically scaling that server to buy us a bit more headroom, load balancers are really by far our best tool.

Next let’s take a look at the scenario where your servers aren’t overloaded at all, but are responding very slowly.

Application is Slow

For this section, let’s say you’re running an API which takes a URL and returns a well formatted, easy to read version of the page (something like this). Your initial setup is simple, with a load balancer, some web servers and a database.

All of a sudden, your users start to complain that your application has gotten slow. Humans can perceive latency over 100 to 200 ms, so ideally you’d want your pages to load in less than 200 ms; for some reason, your application is now taking 1.5 seconds to load.

How can you fix this?

Your first step for every performance problem is profiling. In many cases, profiling won’t make it immediately obvious what you need to fix, but it will almost always give you a bread crumb. You follow that bread crumb until you profile another component which gives you another bread crumb, and you keep chasing.

In this case, it’s pretty likely that the first issue is that reading from your database has gotten slow. In a case where users frequently call your API on the same URLs, then a great next step would be adding a cache like Redis or Memcached.

Now for all API calls you’ll check:

  1. to see if the data is in your cache, and use it if it is,
  2. if not, to see if the data is in your database; if it is, you’ll write that data to the cache and then return it to the user,
  3. otherwise, you’ll crawl the website, write the result to the database, then to the cache, and then return it to the user.

As long as users keep crawling mostly the same URLs, your API should be fast again.
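The three-step lookup above translates almost directly into code. A sketch with dicts standing in for the cache and database, and a stub crawler:

```python
def readable_page(url, cache, db, crawl):
    """Cache -> database -> crawl lookup, writing back on each miss."""
    if url in cache:                  # 1. cache hit: cheapest path
        return cache[url]
    if url in db:                     # 2. database hit: warm the cache
        cache[url] = db[url]
        return cache[url]
    page = crawl(url)                 # 3. full miss: crawl, persist, cache
    db[url] = page
    cache[url] = page
    return page

cache, db, crawls = {}, {"http://example.com/a": "<stored article>"}, []

def crawl(url):
    # Stub standing in for the real (slow) crawler.
    crawls.append(url)
    return f"<article for {url}>"
```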

Another common scenario is that your application is getting slow because it’s simply doing too much! Perhaps every incoming request is retrieving some data from your cache, but it’s also writing a bunch of analytics into your database, and writing some logs to disk. You profile your server, and find that it’s spending the majority of its time writing the analytics data.

A great way to solve this is to run the slow code somewhere where it won’t impact the user. In some languages this is as simple as returning the response to the user and doing the analytics writes afterward, but many frameworks make that surprisingly tricky.

In those cases, a message queue and asynchronous workers are your friend. (Some example message queues are RabbitMQ and Amazon Simple Queuing Service.)

(Note that I’ve gone ahead and replaced the named servers with boxes which represent many servers, many cache boxes, and so on. I did this to reduce the overall complexity of the diagram, trying to keep it simple. Diagram refactoring is often useful! A hybrid approach is to have a box of servers and then to show multiple servers within that box, but only draw lines from the containing box, allowing you to more explicitly indicate many servers in a given role without having to draw quite so many overlapping lines.)

Now when a request comes to your servers, you do most of the work synchronously, but then schedule a task in your message queue (which tends to be very, very quick; it varies depending on which backend you’re using, but is very likely under 50 ms). Your worker processes then dequeue the task and process it later, without your user having to wait for it to finish (in other words, asynchronously).
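Here’s a self-contained sketch of that pattern using Python’s stdlib queue and a worker thread; a real deployment would use RabbitMQ or SQS with separate worker processes, but the shape is the same:

```python
import queue
import threading

tasks = queue.Queue()
analytics_log = []

def worker():
    # Dequeue tasks and run the slow analytics write off the request path.
    while True:
        task = tasks.get()
        if task is None:
            break
        analytics_log.append(task)
        tasks.task_done()

threading.Thread(target=worker, daemon=True).start()

def handle_request(user_id):
    # Fast, synchronous part of the request...
    response = f"hello {user_id}"
    # ...then schedule the slow part and return immediately.
    tasks.put({"event": "page_view", "user": user_id})
    return response

handle_request("u1")
handle_request("u2")
tasks.join()   # wait for the worker in this demo; real callers wouldn't
```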

Finally, sometimes latency is not about your application at all, but instead is because your user is far away from your server. Crossing from the United States’ east coast to the west will take about 70ms, and a bit more to go from the east coast to Europe, so traveling from the west coast to Europe can easily be over 160ms.

Typically the most practical way to address this is to use a CDN, which behaves like a geographically distributed cache, allowing common requests to get served from local caches which are optimistically only 10 or 20 ms away from the user instead of traveling across the globe. (Amazon CloudFront, Fastly, EdgeCast and Akamai are some popular examples of CDNs.)

Larger companies or those with very special needs–typically with high bandwidth needs like video streaming–sometimes create points of presence or POPs which act like CDNs but are much more customizable, but it’s increasingly uncommon as more and more companies start and stay in Amazon, Google or Microsoft clouds.

To repeat slightly, CDNs are typically only useful for situations where your content is either static or can be periodically regenerated, which is the case for a newspaper website for logged out users, and where some subset of your content is popular enough that multiple people want to see the same content in a given minute or so (acknowledging that a minute is an arbitrary length of time, you can cache content however long you want). (This admittedly ignores the latency advantages of SSL termination, which this article covers nicely.)

If you need to reduce latency to your users and your workload doesn’t make using a CDN effective, then you’ll have to run multiple datacenters, each of which has either all of your data or has all the necessary data for users who are routed to it.

(If you happen to be wondering or may be asked what mechanism picks the right datacenter for you, generally that is Anycast DNS, which is very nifty and definitely worth some bonus points if you can relevantly mention it in an interview.)

Running multiple datacenters tends to be fairly complex depending on how you do it, and as a topic is beyond the scope of something which can be covered in this article. A relatively small percentage of companies run multiple datacenters (or run in multiple regions for the cloud equivalent), and almost all of them do something fairly custom.

That said, designing a system which supports multiple datacenters ends up being very similar to scaling out your database, which happens to be the next section.

Database Is Overloaded

Let’s say that you’re implementing a simple version of Reddit, with user submissions and comments.

All is good for a while, but you start getting too many reads and your database becomes CPU bound. If you’re using a SQL database like MySQL or PostgreSQL, the simplest fix is to add replicas which have all the same data as the primary but which only support reads (nomenclature here varies a bit, with primary and secondary or primary and replica becoming more common, older documentation and systems tend to refer to these as master and slave, but that usage–despite being the default and dominant for many years and not uncommon to hear today–is steadily going out of favor).

Your write load must still be applied to every server (primary or replica), so replicas do not give you any additional write bandwidth, but every read can be done against any server, within the constraints of replication lag. Replication lag only matters if you are trying to read data immediately after you write it, in which case you may have to force “read after write” operations to go to the primary database, while still allowing all other reads to go to any of the replicas. This lets you horizontally scale out reads for a long time (at some point you’ll become constrained in adding more replicas by the available bandwidth, but many databases allow you to chain replication from one replica to another, which will, in theory anyway, allow you to scale out more or less indefinitely).
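Here is a sketch of the routing logic an application (or proxy) applies in front of a replicated database; the server names are placeholders:

```python
import itertools

class ReplicaRouter:
    """Send writes to the primary; spread reads across replicas round-robin."""
    def __init__(self, primary, replicas):
        self.primary = primary
        self._reads = itertools.cycle(replicas)

    def route(self, is_write, needs_read_after_write=False):
        # Read-after-write must see the newest data, so it also goes
        # to the primary; all other reads can hit any replica.
        if is_write or needs_read_after_write:
            return self.primary
        return next(self._reads)

r = ReplicaRouter("db-primary", ["db-replica01", "db-replica02"])
```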

If you're overloaded with writes, your solution will end up being a bit different. Rather than replication, you'll need to rely on sharding: splitting your data so that each database server only contains a subset, and then having a mechanism (often just some code in your application) which correctly picks the right shard for a given operation.

Because this allows each server to only do a subset of the writes, you can scale out horizontally to handle more or less any write load. In practice people dislike sharding because you can no longer perform operations against all of your data (for example, to count all of the submitted posts), instead you’ll have to perform the operation against each shard independently and figure out how to combine the results yourself.
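Both halves of the tradeoff above fit in a short sketch: hash-based shard selection, and the scatter-gather you're forced into for queries that span all shards. The dicts are hypothetical stand-ins for four database servers:

```python
import hashlib

SHARDS = [{}, {}, {}, {}]  # stand-ins for four database servers

def shard_for(key):
    """Deterministically pick a shard from the key's hash."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

def save_post(post_id, post):
    # Each write lands on exactly one shard, so write load divides
    # roughly evenly across servers.
    shard_for(post_id)[post_id] = post

def count_all_posts():
    # The downside: no single server has all the data, so a "global"
    # query becomes a scatter-gather across every shard.
    return sum(len(shard) for shard in SHARDS)
```

Note that `% len(SHARDS)` is also the weak point: changing the shard count remaps most keys, which is why rebalancing (discussed below) tends to eat so much operational time.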

If your servers are running out of IOPS or disk space, sharding is also the best solution. In a certain sense, sharding is load balancing for stateful services.

Sharding is also operationally complex to maintain: you'll spend a lot of time maintaining the mechanisms which route to the correct shard, and even more time figuring out how to rebalance your shards to prevent one shard from becoming so large that it can't handle all of the writes routed to it.

If you have heavy reads and heavy writes, then you can absolutely combine replication and sharding, giving each shard its own primary and set of read replicas.

Although replication and sharding are two of the most common scaling solutions for an architecture interview, there are a few others worth mentioning:

Batching is useful when you have too many writes for your database, but for whatever reason you don't want to start sharding your database yet. Especially for use cases like analytics which are tracking the number of pageviews for a given page over a minute, batching could allow you to go from 10,000s of writes per second to one write per minute (although you'll have to store the batched data somewhere, perhaps in a message queue or even just in your application's memory).
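A minimal in-memory version of that pageview batcher might look like this. The `flush_to` callable is a hypothetical sink (a database write, or a message queue publish); in a real system you'd call `flush` on a timer and accept that a crash loses one interval's worth of counts:

```python
import collections

class PageviewBatcher:
    """Accumulate pageview counts in memory, flush them in one write."""

    def __init__(self, flush_to):
        self.counts = collections.Counter()
        self.flush_to = flush_to  # sink: dict of page -> count

    def record(self, page):
        # Cheap in-memory increment instead of a database write
        # per pageview.
        self.counts[page] += 1

    def flush(self):
        # One write per page per interval, regardless of how many
        # individual views were recorded.
        self.flush_to(dict(self.counts))
        self.counts.clear()
```

The design choice here is the classic batching tradeoff: you give up durability and freshness of the most recent interval in exchange for collapsing thousands of writes into one.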

Finally, the last strategy I'll mention here is to use a different kind of database. In particular, NoSQL databases like Cassandra are designed to spread writes and reads across many servers by default, instead of requiring you to implement your own custom sharding tools. The typical downside to NoSQL servers is that they offer a reduced set of operations you can perform efficiently, because each shard only has a subset of the data.
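The "spread by default" placement that Dynamo-style databases use is usually some form of consistent hashing. Here's a deliberately minimal sketch of the idea; real systems add virtual nodes and replication on top of this:

```python
import bisect
import hashlib

class HashRing:
    """Minimal consistent-hash ring: keys map to the first node
    clockwise from the key's position on the ring."""

    def __init__(self, nodes):
        # Place each node on the ring at the position of its hash.
        self._ring = sorted((self._hash(n), n) for n in nodes)
        self._hashes = [h for h, _ in self._ring]

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def node_for(self, key):
        # Walk clockwise to the first node at or after the key's
        # hash, wrapping around the ring.
        i = bisect.bisect(self._hashes, self._hash(key)) % len(self._ring)
        return self._ring[i][1]
```

Compared with the `hash % N` sharding sketched earlier, the advantage is that adding or removing a node only remaps the keys adjacent to it on the ring, rather than most of the keyspace.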

If you want to get deeper on NoSQL, my advice would be to read the fairly approachable Dynamo Paper which will give you the concepts and vocabulary to have a thoughtful conversation about the scenarios where a NoSQL database might be appropriate.

Ending Thoughts

This ended up being a bit longer than expected, and in particular what really came to me while writing it is what an odd kind of interview this is. Essentially it is a test to see if people can create the impression that they’ve built large or complex systems before.

Oddness aside, this interview format does a great job of recreating the real experience of co-designing a system together or explaining how an existing system works to a new teammate. As long as we use it with candidates with an appropriate background, namely some years of experience at an internet company or a company using the same technologies as an internet company, then I think it's a useful tool in your kit.

If not this format, how do you get a sense of someone’s system design experience?


Sep 27 2016 [Archived Version] □ Published at Latest Django packages added

Pluggable map widget for Django Postgis point fields


Sep 27 2016 [Archived Version] □ Published at Latest Django packages added

Django models for HasOffers


Sep 26 2016 [Archived Version] □ Published at Latest Django packages added

Django simple ticketing system

Django security releases issued: 1.9.10 and 1.8.15

Sep 26 2016 [Archived Version] □ Published at The Django weblog

In accordance with our security release policy, the Django team is issuing Django 1.9.10 and 1.8.15. These releases address a security issue detailed below. We encourage all users of Django to upgrade as soon as possible.

CVE-2016-7401: CSRF protection bypass on a site with Google Analytics

An interaction between Google Analytics and Django's cookie parsing could allow an attacker to set arbitrary cookies leading to a bypass of CSRF protection.

Thanks Sergey Bobrov for reporting the issue.

Affected supported versions

  • Django 1.9
  • Django 1.8

Django 1.10 and the master development branch are not affected.

Per our supported versions policy, Django 1.7 and older are no longer receiving security updates.


Patches to resolve the issue have been applied to Django's 1.9 and 1.8 release branches. The patches may be obtained from the following changesets:

The following new releases have been issued:

The PGP key ID used for these releases is Tim Graham: 1E8ABDC773EDE252.

General notes regarding security reporting

As always, we ask that potential security issues be reported via private email to security@djangoproject.com, and not via Django's Trac instance or the django-developers list. Please see our security policies for further information.


Sep 26 2016 [Archived Version] □ Published at Latest Django packages added

httpstat: Simple cURL Stats

Sep 26 2016 [Archived Version] □ Published at David Walsh Blog under tags  python shell

There are a lot of tools out there that do great, advanced things but don't present them as well as they could be presented. I won't knock cURL for anything — it's an amazing tool many of us can't live without; what I will say, however, is that it's nice having tools on top of cURL for […]

The post httpstat: Simple cURL Stats appeared first on David Walsh Blog.


Sep 21 2016 [Archived Version] □ Published at Latest Django packages added

django-planet aggregates posts from Django-related blogs. It is not affiliated with or endorsed by the Django Project.
