The Twelve Days of AWS: Aurora

12 Days of AWS Day 12 written around snowflakes with a penguin ice fishing

We've made it to the final day, the 12th day of AWS!

Talking of RDBMSs: with all of the data lake, data warehousing, and NoSQL solutions available on AWS, it is nice to still have access to a good old RDBMS, which Aurora provides in both MySQL and PostgreSQL flavors.

While dealing with the aforementioned (LINK TO LAMBDA POST) performance limit on Lambda when processing large PDF files, one option was to split the amount of byte data being fed into the PDF processing library and then append the pages together at the end of the process.
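The splitting idea can be sketched in plain Python. This is only an illustration of the byte-range approach: `parse_pages` is a hypothetical stand-in for the real PDF library, and the tiny chunk size is purely for demonstration.

```python
# Illustrative sketch: split a large payload into byte ranges, process each
# range independently (as separate Lambda invocations might), then stitch
# the per-chunk results back together at the end.

CHUNK_SIZE = 4  # tiny for illustration; a real job would use megabytes

def parse_pages(chunk: bytes) -> list[bytes]:
    # Hypothetical stand-in "parser": treats each byte as one parsed page.
    return [chunk[i:i + 1] for i in range(len(chunk))]

def byte_ranges(total: int, size: int) -> list[tuple[int, int]]:
    # Inclusive ranges of the kind an S3 Range GET accepts: (0, 3), (4, 7), ...
    return [(start, min(start + size, total) - 1)
            for start in range(0, total, size)]

def process(data: bytes) -> list[bytes]:
    pages: list[bytes] = []
    for start, end in byte_ranges(len(data), CHUNK_SIZE):
        pages.extend(parse_pages(data[start:end + 1]))  # one "invocation"
    return pages
```

Each range stays under the per-invocation limit, and the final list of pages comes out in order because the ranges themselves are ordered.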

While S3 can be used to keep track of events by creating some form of key object, and even assigning metadata to that object, it is neither fast nor transactional.

Simply having a table entry for each S3 object being parsed, with columns to track how much data had been parsed and the state and location of the constituent parts, was far easier to accomplish in Aurora.
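A tracking table like that is easy to sketch. The schema below is hypothetical (the table and column names are invented for illustration), and it runs here against in-memory SQLite purely so it works anywhere; on Aurora the DDL would be essentially the same MySQL or PostgreSQL.

```python
import sqlite3

# Hypothetical schema for tracking per-object parse progress.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE pdf_parse_jobs (
        s3_key        TEXT PRIMARY KEY,    -- the S3 object being parsed
        bytes_total   INTEGER NOT NULL,
        bytes_parsed  INTEGER NOT NULL DEFAULT 0,
        state         TEXT NOT NULL DEFAULT 'PENDING',  -- PENDING/RUNNING/DONE
        parts_prefix  TEXT                 -- where the constituent parts live
    )
""")

conn.execute(
    "INSERT INTO pdf_parse_jobs (s3_key, bytes_total) VALUES (?, ?)",
    ("reports/example.pdf", 10_000_000),  # example key, invented
)

# Each chunked invocation updates its row in one transaction -- the
# guarantee S3 key-plus-metadata bookkeeping cannot give you.
with conn:
    conn.execute(
        "UPDATE pdf_parse_jobs SET bytes_parsed = bytes_parsed + ?, "
        "state = 'RUNNING' WHERE s3_key = ?",
        (2_500_000, "reports/example.pdf"),
    )

row = conn.execute(
    "SELECT bytes_parsed, state FROM pdf_parse_jobs WHERE s3_key = ?",
    ("reports/example.pdf",),
).fetchone()
```

The point is the transactional update: concurrent workers incrementing `bytes_parsed` stay consistent, whereas racing writes to S3 object metadata would simply overwrite each other.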

Just as you could stand up an EC2 instance with your code, a custom notification system, and a database to back it up, you can instead bundle Lambda, SNS, and Aurora and do the whole thing without having to do any DevOps!

For someone like me who would rather never have to take on the role of sysadmin, being able to use these 'serverless' solutions can be extremely gratifying. While performance and cost may vary, in the short term at least they provide a lot of flexibility and fast lead times for proofs of concept, and longer-term discussions about the optimal solution can be had later on.


I hope that these little snippets of information about AWS have provided some insight into the mysterious world of weird names and acronyms, and exposed them for what they are: a collection of very useful tools for the modern web developer, amongst others.