Currently, all backend systems for Deepdive — namely Collect, EventProcessor, and the API that bridges the database and frontend — are written in Python, utilizing FastAPI for API development and Pydantic for schema validation. While this stack performs well and offers a robust foundation, I’ve decided to experiment with rewriting these services in JavaScript. Here’s why:

Challenges with the Current Stack

  1. Switching Between Python and JavaScript: Since the frontend is written in React using JavaScript (specifically JSX), I find myself constantly switching between Python and JavaScript. This context switching disrupts my workflow and creates mental overhead as I adapt to the nuances and best practices of each language.

  2. Variable Naming Differences: Each language has its own best practices and conventions. For example, Python favors snake_case, while JavaScript leans toward camelCase. This discrepancy can lead to inconsistencies or the need for constant translation of variable names between frontend and backend.

  3. Library Ecosystem: In my experience, the external libraries I rely on in the JavaScript ecosystem tend to be better documented and more actively maintained than their Python counterparts. This makes solving problems or implementing new features faster and smoother in JavaScript.
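The naming mismatch in point 2 usually surfaces as key translation at the API boundary. As a rough illustration (this helper is hypothetical, not part of Deepdive), a React frontend consuming a snake_case Python API often ends up carrying a shim like this:

```javascript
// Hypothetical shim: recursively convert a Python-style snake_case payload
// into the camelCase keys a JavaScript frontend expects.
function snakeToCamelKeys(obj) {
  if (Array.isArray(obj)) return obj.map(snakeToCamelKeys);
  if (obj === null || typeof obj !== 'object') return obj;
  return Object.fromEntries(
    Object.entries(obj).map(([key, value]) => [
      // user_id -> userId, event_type -> eventType
      key.replace(/_([a-z])/g, (_, c) => c.toUpperCase()),
      snakeToCamelKeys(value),
    ])
  );
}

// e.g. { user_id: 1, event_type: 'click' } -> { userId: 1, eventType: 'click' }
```

With the backend also in JavaScript, this entire translation layer (and its inverse on the way back up) simply disappears.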

The Experiment: Rewriting in JavaScript

To address these challenges, I’m conducting an experiment by rewriting the backend services using Fastify and Joi:

  • Fastify: A fast and lightweight web framework for Node.js, known for its performance and ease of use.

  • Joi: A powerful schema description and data validation library, serving a similar purpose to Pydantic in Python.

Migrating from Redis to RabbitMQ

Alongside this rewrite, I’m also making a key architectural change: switching from Redis to RabbitMQ for queue management.

  • Why RabbitMQ? Redis queues (built on lists or streams) live primarily in memory, and while Redis offers optional persistence, RabbitMQ is a dedicated message broker with native support for durable queues and persistent messages. This means that in the event of a server crash or restart, RabbitMQ can recover queued messages from disk, preventing data loss.
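As a sketch of what the durability guarantee looks like in practice, here is a small publisher using amqplib (the queue name and broker URL below are placeholders, not Deepdive's real configuration, and running it requires a local RabbitMQ instance):

```javascript
// Sketch of a durable RabbitMQ publisher using amqplib (npm install amqplib).
// Queue name and broker URL are illustrative placeholders.
const QUEUE = 'deepdive-events';

// Serialize an event for the wire; kept separate so it can be tested alone.
function encodeEvent(event) {
  return Buffer.from(JSON.stringify(event));
}

async function publishEvent(event, url = 'amqp://localhost') {
  // Required lazily so the module loads even where amqplib isn't installed.
  const amqp = require('amqplib');
  const conn = await amqp.connect(url);
  const channel = await conn.createChannel();
  // durable: true -> the queue definition survives a broker restart.
  await channel.assertQueue(QUEUE, { durable: true });
  // persistent: true -> the message body is written to disk, so it can be
  // recovered after a crash: the property motivating the switch from Redis.
  channel.sendToQueue(QUEUE, encodeEvent(event), { persistent: true });
  await channel.close();
  await conn.close();
}
```

Note that both flags are needed: a durable queue alone preserves only the queue's definition, while `persistent: true` asks the broker to write each message to disk as well.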