Optimize long Rails rendering time
There is an ideal level of optimization for every project. Optimize too soon, and you can spend precious time on soon-to-be-discarded features. Optimize too late, and the business side suffers.
This article is an example, with real business impact, of implementing basic optimization strategies such as caching. We cut the loading time of some screens in our sales backend by a factor of almost 10, resulting in an improved experience and more customer-facing time for our sales team. While there are many technical articles about optimization, I have found limited resources showing the whole process from investigation to results, so I hope this one will be useful to others.
// Hi there, I’m the CTO of Wemind, and we’re committed to creating a better world for freelancers. We offer a 360° insurance and benefits pack, including health, disability, legal and rent insurance, for freelancers and entrepreneurs.
The problem
In the case of Wemind, our web app has two purposes:
- to provide an efficient CRM and support tool for our sales team
- to offer our members an easy way to subscribe to, manage and take advantage of the suite of Wemind services.
As far as the sales team is concerned, I see the role of our tech team as an enabler and accelerator: automate processes, eliminate manual reporting, facilitate customer calls with pricing and note-taking features, and so on. For that purpose, we built a custom CRM with integrated pricing tools (pricing insurance contracts involves some of the most complicated business logic I’ve encountered :-p).
In the beta phase, with only a handful of customers, the sales panel was slick and fast. However, since our launch in June 2017, both our prospect and customer lists have grown extremely fast. The various CRM views showing lists of prospects/users were slowing to a crawl, and our sales team was getting frustrated. It was time to get into optimization mode.
Know your enemy
Before starting work on an optimization problem, it is important to gather a handful of metrics. This helps you target the area with the most potential for improvement, and track progress.
Client side
The first step of our inspection was to fire up the Chrome network inspector. Here are some of the worst stats — the wide range depends on the view, the server load, and network conditions:
- Total time “from click to display”: 4–10 s
- Including wait-for-server time: 3–9 s
- Download time: 100–800 ms (obviously dependent on connection speed)
- Size: 70–200 KB
This first result was a good incentive to investigate the server side.
Server side
Opening the server logs gave further details:
- Collection fetching (prospects, clients, etc.): 100 ms on average, 300 ms in the worst cases.
- ERB HTML/JS rendering time: 3–9 s. Almost all of this time is spent rendering the collection list; the rest of the page is negligible.
Analysis
Before looking at the metrics, we guessed the culprit was slow collection fetching. While it remains a good target for caching, the main target is actually the rendering time. This shows how important it is to spend a little time investigating before diving head first into the code :-)
Most of the rendering time was spent rendering an HTML list, which seems crazy. However, not only can the lists be very long, but each item also requires some computation, for example to check its status.
In addition, Heroku metrics showed that the longest requests (10 s or more) occurred during peak load, when memory usage exceeds the allocated RAM quota. Above your RAM quota, requests are slowed down.
Strategies
Increasing the available memory on Heroku
This typically costs $25/month/dyno, which may or may not be acceptable business-wise depending on your situation.
Results: this reduced the rendering time to a more consistent range of 2–4 s. This represents at least a 2x improvement, but the main effect is to prevent the peak loading times of 10 s or more, when requests are throttled down.
Caching fragments
Here are two great resources about caching: the official Rails guide to start, and this in-depth article.
This is a simple way to accelerate rendering. The first time a user/prospect is rendered, the resulting HTML is stored in a cache store (i.e., a place where cached fragments are stored and retrieved from). On Heroku, Memcachier is a popular add-on, as is Redis. Rails makes it incredibly simple to implement, with at most one line of code to add.
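As an illustration, here is a minimal sketch of how such a cache store might be wired up — the Memcachier-backed setup below follows standard Rails configuration and is an assumption, not our exact production file:

```ruby
# config/environments/production.rb — hedged sketch of a cache store setup.
Rails.application.configure do
  # Use Memcached (e.g. via the Memcachier add-on on Heroku) as the cache
  # store. The dalli gem provides the Memcached client under the hood.
  config.cache_store = :mem_cache_store,
                       ENV["MEMCACHIER_SERVERS"],
                       { username: ENV["MEMCACHIER_USERNAME"],
                         password: ENV["MEMCACHIER_PASSWORD"] }

  # Fragment caching only happens when this is true (the production default).
  config.action_controller.perform_caching = true
end
```

With the store configured, the `cache` and `cached:` helpers shown below transparently read from and write to Memcached.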
Results :
- Individual object caching led to 900 ms–2 s response times (~2x faster than the previous step, more than 4x faster than the original). In this case, only recently updated people are re-rendered — all others are loaded from the cache. It just requires a line such as
<% cache user do %> <%# render the user row here %> <% end %>
- Collection caching, in which all individual cache fragments are loaded at once, led to loading times of 400–900 ms (~2x faster than the previous step, almost 10x faster than the original). It just requires adding cached: true, such as
<%= render partial: "users/admin/member", collection: members, cached: true %>
Low-level caching
I initially said that collection fetching was not the main target for optimization, because it is negligible in a 10s request. However, with the request time now below 1s, it is worth revisiting this option.
Initial tests showed that collection fetch time was reduced from 100ms to 20–40ms, and the total request times were reduced to 350–850 ms.
However, there’s a catch: low-level caching requires a key to invalidate stale entries. Rails provides a cache_key method whose value changes when a record is updated. By analogy, we used a key based on the latest updated_at value of the collection: as long as no User is updated, the collection will be fetched from the cache.
members = Rails.cache.fetch("members_#{User.maximum(:updated_at)}", expires_in: 1.day) do
  # filter to retrieve members
end
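The invalidation mechanism behind this snippet can be shown in plain Ruby, without Rails. The sketch below emulates Rails.cache.fetch with a Hash-backed store (TinyCache and all names are hypothetical, purely for illustration): when any record's updated_at changes, the computed key changes, so the stale entry is simply never read again.

```ruby
# Minimal illustration of key-based cache invalidation, emulating the
# Rails.cache.fetch pattern with an in-memory Hash.
class TinyCache
  def initialize
    @store = {}
  end

  # Return the cached value for `key`, computing and storing it on a miss.
  def fetch(key)
    @store.key?(key) ? @store[key] : (@store[key] = yield)
  end
end

cache = TinyCache.new
max_updated_at = "2017-09-01 10:00:00"

# First call computes; second call with the same key hits the cache.
calls = 0
2.times { cache.fetch("members_#{max_updated_at}") { calls += 1; [:alice, :bob] } }
# calls == 1: the block ran only once

# When any User is updated, max(updated_at) changes, the key changes,
# and the next fetch recomputes; the old entry is left behind unread.
max_updated_at = "2017-09-02 09:30:00"
cache.fetch("members_#{max_updated_at}") { calls += 1; [:alice, :bob, :carol] }
# calls == 2: the new key forced a recompute
```

This is also why an expires_in option matters in the real version: abandoned keys would otherwise accumulate in the store forever.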
However, this would help only for a small fraction of requests (most requests do actually change one user), and our implementation looked fragile (some tests in our test suite were failing in a non-reproducible fashion). As a result, the gains looked small enough that we chose not to implement this strategy.
Results: we chose not to implement this cache.
At this stage, almost half of the remaining time (and most of the variability) was due to download speed.
Additional idea: Asynchronous rendering
Render Async is a great way to render part of the content asynchronously. The issue is that the list of prospects is precisely the content that needs to be rendered on the page, so deferring it was not going to help.
Conclusion
By investigating the root cause of our app’s slowdown, and by implementing caching in the right places, we have sped up rendering of the slowest screens by a factor of 10. This translated into a better, more useful tool for our sales team. No fancy algorithms were used, only basic techniques that required very little code. I hope this gives you some inspiration for your next optimization session :-)
Thanks for reading! If you have any feedback, thanks for sharing it in the comments. And if you liked the article, remember to click the green ❤ below so that others can discover it ;)