I know the name sounds crazy… actually it kind of does… maybe AMEN would have been a better choice. Anyway, MEAN was coined affectionately by Valeri Karpov, Kernel Tools Engineer at MongoDB. Just like existing popular web stacks (or software bundles) such as LAMP, WAMP, etc., MEAN is yet another emerging web/solution stack. MEAN is an acronym for:
MongoDB - a document-based NoSQL datastore. It stores data in the form of binary-encoded JSON documents (BSON documents). >> Introduction to MongoDB — MongoDB Manual 2.4.9
ExpressJS - an HTTP server framework built on top of Node >> Express - node.js web application framework
AngularJS - a JavaScript framework offering declarative, two-way data binding for web apps >> AngularJS — Superheroic JavaScript MVW Framework
Node.js - a server-side JavaScript platform built on Chrome's JavaScript runtime (V8) >> node.js
P.S.: This article assumes that the reader has a basic understanding of the aforementioned web stack elements. If not, it's recommended to go through the links provided next to each of them before proceeding with the article.
The MEAN stack is a fully JavaScript-based stack, and the resulting homogeneity lets developers build fast and scalable web apps. For example, a user initiates an operation at the Angular-powered frontend, which is received by the Express/Node server. This request could then (if needed) be forwarded to perform an operation on the database, wherein data is accessed in JavaScript Object Notation (JSON) format. The connecting link is strengthened as the response from the database can be sent back to the frontend user in the same JSON format, which is unambiguously understood by the Angular framework.
This provides a great impetus in developing and debugging web apps, as a single language is used across the stack. It also reduces the fundamental disconnect often faced between frontend and backend developers due to disparity in the language stacks used.
MongoDB
MongoDB is a document-based NoSQL datastore wherein data is stored as JSON documents. Let's take an example to understand how Mongo behaves differently from an RDBMS. Suppose we are to create an invoice/bill of the usual kind: a header with customer and payment details, followed by a list of line items.
In an RDBMS, we find that a single cohesive structure such as this invoice is actually a composite of different, yet interconnected, parts. The data is assembled (joined) from different tables, mapped into memory and then merged to generate a single document. Digressing a bit: most large-scale apps have an object-oriented business-logic layer which in turn communicates with a relational database system. At the atomic level, there exist significant differences between the structures defined in the object-oriented and relational domains, and between the rates at which the two domains respond (in-memory object access is much faster than relational access). This creates a phenomenon called the Impedance Mismatch, which is basically a set of conceptual and technical difficulties arising when an RDBMS is consumed by a program written in an object-oriented programming language. Most enterprises try to mitigate the situation by using Object-Relational Mapping (ORM) tools. The point to be noted, however, is that an ORM tool bridges the impedance-mismatch gap but doesn't remove it.
Mongo, being a document-based non-RDBMS, uses the document as its data model: a complex data structure represented in JSON format. These databases do not have a set schema, which is in sharp contrast with relational databases and their strict schemas. The same invoice could also be rendered in JSON format and would look like:
/** JSON document in a mongo database collection, say, invoices */
{
  ID: 1001,
  customer: "Ann",
  line_items: [
    {
      productID: "0321293533",
      quantity: 2,
      listPrice: 48,
      totalCost: 96
    },
    …
  ],
  payment_details: {
    Card: "Amex",
    CC_Number: 12345,
    expiry: "04/2001"
  }
}
What's worth noticing is that the entire document can now be stored in a single complex data structure. This reduces the amount of assembly (joining) to be done before bringing the aggregated document into memory.
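For illustration, persisting and fetching such a document with the node-mongodb-native driver could look like the following minimal sketch (the invoices collection follows the example above; db is assumed to be an open database handle obtained via MongoClient.connect()):

/** Minimal sketch; assumes `db` is an open database handle */
var invoice = {
  ID: 1001,
  customer: "Ann",
  line_items: [
    { productID: "0321293533", quantity: 2, listPrice: 48, totalCost: 96 }
  ],
  payment_details: { Card: "Amex", CC_Number: 12345, expiry: "04/2001" }
};

// Store the whole aggregate as a single document
db.collection('invoices').insert(invoice, function(err, result) {
  if (err) throw err;

  // Fetch it back in one read, no joins required
  db.collection('invoices').findOne({ ID: 1001 }, function(err, doc) {
    if (err) throw err;
    console.log(doc);
  });
});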
But what if we want to view the data differently, e.g. as aggregations? What if we want to view the total quantity bought of a product across all invoices?
In the RDBMS world, SQL provides aggregations which can be consumed in a fairly simple manner. If a table ORDER_LINES had data like this:
invoiceID | productID | quantity | cost | total
----------|-----------|----------|------|------
123       | 24323456  | 12       | 34   | 408
123       | 24323457  | 4        | 12   | 48
124       | 24323458  | 19       | 56   | 1064
125       | 24323456  | 6        | 34   | 204
125       | 24323458  | 8        | 12   | 96
Then, the aggregation could look something like:
SQL> SELECT productID, SUM(quantity) FROM ORDER_LINES GROUP BY productID;
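Run against the sample rows above, this query would yield:

productID | SUM(quantity)
----------|--------------
24323456  | 18
24323457  | 4
24323458  | 27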
In the Mongo environment, however, the same result has traditionally been computed via MapReduce, which turns out to be slightly more involved.
/** node-mongodb-native JavaScript implementation;
    `db` is assumed to be an open database handle from MongoClient.connect() */

// Emit a (productID, quantity) pair for every line item of each invoice
function map() {
  for (var i = 0; i < this.line_items.length; i++) {
    var key = this.line_items[i].productID;
    var value = this.line_items[i].quantity;
    emit(key, value);
  }
}

// Sum all quantities emitted for a given productID
function reduce(key, valueSet) {
  return Array.sum(valueSet);
}

db.collection('invoices')
  .mapReduce(
    map,
    reduce,
    { out: { inline: 1 } },  // return results inline rather than writing to a collection
    function(err, results) {
      if (err) throw err;
      console.log(results);
    }
  );
Further, as data in MongoDB is stored in BSON (not exactly JSON) format, there's some overhead involved in converting every record from BSON to JSON format and vice versa. All computations are performed on the documents by the embedded JavaScript engine, which (by its nature) lets only one thread run at any moment in time. MapReduce is also intrinsically slow and isn't meant to run in real time. Instead, MapReduce jobs should be run as background jobs and the resulting collection queried in real time. That said, it is now advised to use the Mongo aggregation framework for performing aggregations, as it is faster and more reliable.
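For illustration, the same per-product total could be expressed through the aggregation framework; a minimal sketch, again assuming an open db handle:

/** Aggregation framework equivalent of the MapReduce job above */
db.collection('invoices').aggregate([
  // Flatten each invoice into one document per line item
  { $unwind: '$line_items' },
  // Group by product and sum the quantities
  { $group: {
      _id: '$line_items.productID',
      totalQuantity: { $sum: '$line_items.quantity' }
  } }
], function(err, results) {
  if (err) throw err;
  console.log(results);
});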
Hence, in my opinion, Mongo is a good choice for applications wherein (otherwise scattered) data can be merged and stored as a single aggregate (abiding by the maximum BSON document size limit) and this aggregate is exchanged back and forth between the application and the database. Also note that most NoSQL databases like Mongo are weakly typed, which provides less support for data validation, but this makes them particularly well suited for storing data which is inconsistent or still being prototyped, where the data model changes frequently.
ExpressJS
Express is a web framework for Node. It provides a set of utilities (middleware), helps create an MVC-like application on the server side and allows developers to define custom routes and templates. It behaves very much like Sinatra in the Ruby world.
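For a flavour of how little ceremony is involved, here is a minimal sketch of an Express server (the route and port are purely illustrative):

/** Minimal Express server sketch */
var express = require('express');
var app = express();

// A custom route returning JSON, as an Angular frontend would consume it
app.get('/invoices/:id', function(req, res) {
  res.json({ ID: req.params.id, customer: 'Ann' });
});

app.listen(3000);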
AngularJS
Angularjs.org describes it as "A super-heroic JavaScript MVW framework" and I just love that quote. Angular is what HTML should have been. It's a client-side templating engine; it's declarative and uses two-way data binding, i.e.
1. View to Model - changes made to the View are propagated to the Model
2. Model to View - the View responds to changes made to the Model
This facilitates automatic synchronization between models and views. In my opinion, it's an MVVM (not an MVC) as the $scope object resembles a ViewModel. However, I don't intend to start a web war over the inherent design pattern, and it's fair to put it in the category of MVW (Model-View-Whatever, as in whatever works for you).
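A minimal sketch of this two-way binding (the module, controller and property names are all illustrative):

<!-- View -> Model: typing in the input updates $scope.name;
     Model -> View: the {{name}} expression re-renders on every change -->
<div ng-app="demoApp" ng-controller="GreetingController">
  <input type="text" ng-model="name">
  <p>Hello, {{name}}!</p>
</div>

<script>
  angular.module('demoApp', [])
    .controller('GreetingController', function($scope) {
      $scope.name = 'World'; // the Model, exposed on the ViewModel-like $scope
    });
</script>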
It introduces Dependency Injection and is 100% JavaScript. It also allows developers to create custom HTML elements, thereby extending the existing HTML vocabulary.
Angular introduces the concept of directives, which helps decouple DOM manipulation from application logic: directives handle DOM manipulation while the Angular controller handles business logic. It also introduces custom routing and template loading. Angular has been developed with testability in mind, which makes JavaScript applications built with it straightforward to test.
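As an illustration, a custom directive that keeps DOM manipulation out of the controller might be sketched as follows (the directive name and behaviour are hypothetical):

/** A custom element <current-time></current-time> that updates itself every second */
angular.module('demoApp')
  .directive('currentTime', function($interval) {
    return {
      restrict: 'E',  // usable as a custom HTML element
      link: function(scope, element) {
        // All DOM manipulation lives here, not in a controller
        $interval(function() {
          element.text(new Date().toLocaleTimeString());
        }, 1000);
      }
    };
  });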
In cases of complex UI interaction involving frequent changes to models and views, an MVVM framework like Angular is very handy thanks to its two-way binding. Generating all content on the server and then sending it across to the client becomes difficult when UI interactions soar. Here the only pragmatic savior is to push most of the heavy lifting to the client side, and this is what MVVM frameworks (like Angular) support: they don't burden controllers much and make the View stronger.
Node.js
Again quoting from nodejs.org: "Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. It uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices." Although the introduction is self-explanatory, there are a few major takeaways:
1. Node.js is a platform written in C/C++ and is built on top of the V8 runtime.
2. It is single threaded, which means that irrespective of the number of connections requested, there's always a single Node thread running. This is in contrast with PHP, Java and ASP.NET based servers, where a new thread is instantiated for each client. So it goes without saying that as the number of connections increases, the number of threads increases too, which creates performance impediments like context-switching overheads, memory consumption, etc.
3. It's event-driven. When a Node program starts, an event loop starts, to which event callbacks (code to be executed on an event) are registered. This loop doesn't stop until all the callbacks have been duly executed. Conceptually, it can be thought of as follows:
The event loop picks up callbacks from the event queue; if one needs to perform I/O, like accessing a database or consuming a service, it delegates the action to a common thread pool and resumes picking up other items from the event queue. When the delegated operation returns with a result, thereby injecting another event into the queue, the event loop responds to this new event and processes the result.
4. Node allows HTTP servers to be written asynchronously. HTTP is, by nature, stateless and synchronous. The only way to make a synchronous process function asynchronously is to decouple the thread which receives the request from the one which processes it, and this is exactly what Node offers (a sketch follows this list).
5. On a holistic level, applications are either:
a. CPU bound - operations spend most of their time consuming CPU cycles, e.g. visual effects, AI, simulations, etc.
b. I/O bound - operations spend most of their time on disk access, e.g. reading/writing data from/to disk, etc.
Node.js runs on a single thread and executes callbacks from the event queue. If a callback needs I/O, Node delegates it to the thread pool and continues responding to other requests. Hence, in my opinion, Node is most suited for I/O bound applications with little CPU usage, because all I/O operations in Node are non-blocking. If there's a CPU intensive job, however, the single thread will process the request itself instead of delegating it to the common thread pool.
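To make this concrete, here is a minimal sketch of an asynchronous HTTP server; the file path is purely illustrative:

/** Minimal asynchronous HTTP server sketch */
var http = require('http');
var fs = require('fs');

http.createServer(function(req, res) {
  // Non-blocking I/O: the read is delegated, freeing the event loop
  // to pick up other requests from the event queue
  fs.readFile('/tmp/data.json', function(err, data) {
    if (err) {
      res.writeHead(500);
      return res.end();
    }
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(data);
  });
}).listen(3000);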
However, one may argue that JavaScript has been used extensively for CPU bound jobs on the client side, like graph rendering, painting, etc., and that one can never guarantee an application won't encounter a CPU bound job. So how do we handle such scenarios? Well, all I can think of is using Node clusters, which could provide a basis to sustain a CPU bound service: several Node instances can be run to make the best use of the available CPUs.
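A minimal sketch of such a setup, using Node's built-in cluster module to fork one worker per CPU core:

/** Minimal cluster sketch: one Node instance per CPU core */
var cluster = require('cluster');
var http = require('http');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  // Master process: fork one worker per core
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
} else {
  // Workers share the same port; connections are distributed among them
  http.createServer(function(req, res) {
    res.end('Handled by worker ' + process.pid);
  }).listen(3000);
}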