
Our First Demo

2014-06-13

by Rinat Abdullin

We finally had our demo last week. As always happens in practice, nothing went according to theory.

Unexpected problems

Two big problems surfaced right before the scheduled demo time.

First of all, the RAID on one of the production databases (HPC1) suddenly died. This required Tomas's full attention, taking him away from the demo preparations.

Second, I discovered that the JavaScript part of the chat (which I implemented) gets horribly messed up by subsequent PJAX page jumps. Fortunately, disabling PJAX on the chat navigation links solved the problem in the short term. In the longer term, I'll need to pick up more JavaScript skills. Tomas already recommended that I check out JavaScript: The Good Parts.

Despite these issues, Pieter and I cleaned up HPC2 for the demo. Tomas did an awesome job presenting the product and the vision behind it, which earned us the stakeholders' trust to move forward. They loved it.

We plan to have demos on a monthly basis from this point.

NoSQL in SQL

During the week we decided to give PostgreSQL a try, since it seems a slightly better fit for our needs than MySQL:

  • a great replication story (e.g. Hot Standby and repmgr);
  • mature drivers in Go (compared to MySQL);
  • a binary protocol that does not suffer from legacy issues the way the MySQL API does;
  • a more polished usage experience (compared to MySQL);
  • there is a book on PostgreSQL High Performance, which looks as good as the one I read on MySQL.

PostgreSQL also benefits from being one of the most widely used databases (although it probably has fewer installations than MySQL).

Replacing MySQL with PostgreSQL was simple, since we use SQL storage mostly for NoSQL purposes anyway.

Using SQL for NoSQL gives us the best of both worlds: the mature ecosystem, polished tooling, and transactions of SQL, along with the ease of schema-less development from NoSQL.
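The post doesn't show the actual schema, but the general pattern can be sketched roughly like this: each table keeps a key plus a single JSON column, and Go structs are serialized into it. Everything below (table name, struct, helper) is made up for illustration, not taken from our codebase:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Profile is a hypothetical document we persist schema-lessly.
type Profile struct {
	Name  string `json:"name"`
	Email string `json:"email"`
}

// Illustrative DDL: a primary key plus one JSON column; all other
// fields live inside the document, so the schema rarely changes.
const createTable = `CREATE TABLE IF NOT EXISTS profiles (
    id  TEXT PRIMARY KEY,
    doc JSON NOT NULL
)`

const upsert = `INSERT INTO profiles (id, doc) VALUES ($1, $2)`

// encodeDoc turns a value into the JSON text stored in the doc column.
func encodeDoc(v interface{}) (string, error) {
	b, err := json.Marshal(v)
	if err != nil {
		return "", err
	}
	return string(b), nil
}

func main() {
	doc, err := encodeDoc(Profile{Name: "Rinat", Email: "r@example.com"})
	if err != nil {
		panic(err)
	}
	fmt.Println(doc)
}
```

Adding a field to Profile then requires no migration; the transaction and replication machinery of PostgreSQL still applies to every write.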

By the end of the week I had migrated almost the entire application to PostgreSQL. Decomposing the design into small, focused packages (with logically isolated storage) really helped to move forward.

Next week I plan to finish the migration and improve test coverage in the scenarios that proved tricky during it.

So far, PostgreSQL feels more comfortable than MySQL. If this feeling proves wrong, we can always jump back or try something else.

Being the worst on errors and panics

Sometime during the week, Pieter brought up the question of using panic vs. error in our code. In Go it is idiomatic for a function to return a tuple of result and error:

func Sqrt(f float64) (float64, error) {
    if f < 0 {
        return 0, errors.New("math: square root of negative number")
    }
    // delegate the actual computation to the standard library
    return math.Sqrt(f), nil
}

You can also issue a panic, which stops the ordinary flow of control and unwinds the call chain until a recover is executed in a deferred function, or the program crashes.
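A minimal illustration of that unwinding (not code from our project): the deferred function runs while the panic travels up the stack, and recover stops it from crashing the program.

```go
package main

import "fmt"

// safeCall shows the mechanics: the deferred function runs as the
// panic unwinds the stack, and recover turns it back into a value.
func safeCall(n int) (msg string) {
	defer func() {
		if r := recover(); r != nil {
			msg = fmt.Sprintf("recovered: %v", r)
		}
	}()
	if n < 0 {
		panic("negative input")
	}
	return "ok"
}

func main() {
	fmt.Println(safeCall(5))  // ok
	fmt.Println(safeCall(-1)) // recovered: negative input
}
```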

Since I was burned pretty badly by exceptions in .NET while working with cloud environments at Lokad (everything is subject to transient failure at some point, so you really have to design for failure), I tried to avoid panics in Go altogether. Instead, almost every function returned a tuple of result and error, and problems were explicitly bubbled up.

This led to a lot of unnecessary error checking and some meaningless errors that were pretty hard to trace (since errors in Go do not carry a stack trace).

Thankfully, Tomas and Pieter patiently explained that it is OK to use panics even in scenarios that would later require proper error handling and flow control. Initially this felt like a huge, meaningless hack, but eventually it all clicked.
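The compromise, as I understand it, can be sketched like this: internal code panics freely, and a thin public boundary converts the panic back into an ordinary error for callers. The names below are hypothetical, not from our codebase:

```go
package main

import (
	"errors"
	"fmt"
)

// parseID is internal: it panics on bad input, so the code around it
// stays free of err != nil plumbing.
func parseID(s string) int {
	if len(s) == 0 {
		panic(errors.New("empty id"))
	}
	return len(s) // stand-in for real parsing logic
}

// ParseID is the public boundary: a deferred recover converts any
// panic from the internals back into an ordinary error for callers.
func ParseID(s string) (id int, err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("parse id: %v", r)
		}
	}()
	return parseID(s), nil
}

func main() {
	fmt.Println(ParseID("abc")) // 3 <nil>
	fmt.Println(ParseID(""))
}
```

Callers still get the idiomatic (result, error) tuple, while the layers in between stay free of repetitive error checks.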

Refactoring with this new design insight already makes the code simpler and a better fit for future evolution (which the current stage of the project's life-cycle requires).

Becoming a better developer through your IDE

Over the last few weeks I invested bits of time in learning Emacs and customizing it to my needs. One of the awesome discussions with Pieter on this topic helped me realize how important such IDE tailoring is for personal growth as a developer.

As you probably know, Emacs is almost unusable for development out of the box (vim, even more so). You need to tweak configuration files, pick plugins, and wire them together. Most importantly, you need to make dozens of decisions about how you are going to use this contraption for development.

That's what I used to hate about Emacs, thinking that Visual Studio with ReSharper gave me everything a developer would ever need.

I came to realize that setting up your integrated development environment from scratch forces you to become more aware of the actual process of development. You start thinking about even such simple things as how files are organized in a project and how you will navigate between them. Or how you will refactor your project in the absence of the solution-wide analysis and renaming provided by ReSharper.

Such constraints affect your everyday coding process, pushing the design towards greater decomposition and simplicity. Ultimately, this leads to better understanding.

In the end, Pieter got so inspired by our insights that he decided to ditch Sublime and give Vim a try. We are going to compare our setups and development experiences as we progress through the project. I believe this will lead to even deeper insights for us.

4 Comments

Luke 5 years ago

Hey Rinat,

I’m interested in why you guys are considering PostgreSQL for NoSQL now instead of going back to FoundationDB? Is it still the issue with sequential IDs even though you are using a CRUD approach now rather than event sourcing?


Rinat Abdullin 5 years ago

Luke, the advantage of PostgreSQL over FoundationDB is that it can maintain indexes and execute server-side queries. With FoundationDB we would need to invest time in modeling that explicitly, which is something we didn't want to bother with right now.


Claes 5 years ago

PostgreSQL is always a good choice; for us it has performed perfectly even with 200k connections per minute, and we have multiple tables with 100+ million rows (though 50% of all queries go to the same 4M-row table).

Btw Rinat, I just listened to the first half of Episode 14 of the Distributed Podcast, very interesting! I’ll definitely listen through it again and read through this blog since we have started to do something similar here at Unikum as you’re already doing. (But from a java monolith to micro services)


Rinat Abdullin 5 years ago

Claes, thank you for this encouraging feedback! It makes me feel safer about our PostgreSQL choice.

Regarding the Distributed Podcast: we are planning to record another episode soon with Jonathan Oliver. We discussed that just last week.

