About me
I'm Cyrus. I create systems and environments to solve large-scale, real-world problems. I love big and interesting opportunities.
Working on something amazing?
Have an awesome idea? Let's chat.
Current
Chief Technology Officer, Owl Practice

Past
Infrastructure Engineer, Addictive Mobility
Founder & President, Extreme Serverz

My biography
Timeline
The Start
Extreme Serverz
U of T
Addictive Mobility
Owl Practice
At the age of 10, I became deeply entrenched in the online gaming world - I started my own community for an online modification of GTA on the PC platform. As I played, I kept imagining features the modification was missing that I thought people would love. Ideas in hand, I took action: I created systems that interacted externally with the game server to implement these features. The positive feedback my innovations received was enormous; to this day my community remains a beloved corner of the wider GTA community. I was happy and the community was happy.
As the community continued to grow in popularity, I recognized the opportunity to establish an online hosting business, and Extreme Serverz was born. Extreme Serverz provided web, game, virtual and dedicated hosting. With myself alone at the helm, I grew the company to a base of over 200 concurrent clients, reaching over 100,000 users daily. Much of the company's survival and growth depended on sound security and on automated web-based systems for billing, control, support and ordering. I was the architect of both.
Once in a while, I had to tear myself away from my computer to get some air. When I wasn't working on Extreme Serverz, creating new scripts, gaming, or implementing new features for clients, soccer was my world. I watched online videos of various players and practiced for countless hours in front of my house, eventually competing in local soccer leagues. Soccer was and continues to be a huge passion in my life and keeps me active!
I continued to grow Extreme Serverz by understanding and meeting my clients' needs. One particularly unique and valuable addition to my hosting services was a set of new network-based attack prevention systems. This feature was exclusive to Extreme Serverz, and its success, along with positive customer feedback, attracted many new clients.
In 2009, I was accepted into the Computer Science program at the University of Toronto Mississauga. It was at this point that I decided to shut down Extreme Serverz to focus on my education and the opportunities that stemmed from it. As much as this was a sad moment in my life, I saw it as a new chapter in which I could use my collective experience and knowledge to flourish. I dedicated myself to being the best student I could be, learning and absorbing all of the new information that I could.
I developed close relationships with some of my professors, who were intrigued by my past; we shared ideas and learned from each other. Through hard work and diligence, my name was added to the University of Toronto Dean's List for outstanding achievement in computer science. My time spent at UofT has been invaluable; the knowledge I've gained has enriched my ability to push the limits on the creativity and effectiveness of my current ventures.
During my university career, at the age of 19, I took a full-time position at Addictive Mobility - a startup in Liberty Village, Toronto. It is an ad-network company specializing in mobile, serving mobile-specific targeted ads. I've helped the company grow to what it is today - pivoting from a simple advertising company to one of the world's leading mobile-specific ad targeting platforms.
I designed, built and now solely manage the company's entire online infrastructure of 200+ servers, develop back-end systems for the various clusters, and migrated our entire platform away from Amazon AWS (see details in the Projects section below). I currently hold a senior position as an Infrastructure Engineer.
After some time at Addictive Mobility, I decided I wanted to branch out. I loved what I was doing and learning, but I wanted to move into another market and create something new while still keeping the focus my current day job required. I knew immediately that I wasn't going to be able to do this alone, so I started reaching out to previous connections, trying to come up with something awesome to build that would benefit others globally.
I managed to bring together people I knew and build out an awesome team. As a small but nimble group, we started building a product for private practices, targeting psychologists, social workers, and therapists in Canada. Today, Owl Practice plays a massive part in the private-practice communities across Canada.
Projects
Amazon / Google cloud cost too high? Build it yourself
Public cloud services are great when you're starting out. They offer flexibility, no commitments and, these days, a preconfigured tool for just about anything. But what happens when your business starts really growing and you need to scale out your infrastructure? You could simply spawn more power on demand with the public cloud platform you're on, but this comes at a very high cost. Even after working with account reps and signing one- to three-year contracts, the cost is still high - sometimes upwards of a million dollars yearly!
I successfully planned and executed a full migration from Amazon AWS to bare-metal servers. This not only gave us the ability to harness more power (roughly 400% more) but also cut our tech costs, saving the company ~$1,000,000 yearly. The migration was planned so that downtime was mitigated and data loss or corruption was ruled out. All systems were set up in a split fashion, with half residing in one datacenter and the other half in another, making our systems geo-redundant. We used direct fibre connections between the datacenters to ensure private, near zero-latency connectivity.
Now that our tech costs have been brought down, we have the ability to grow and experiment in ways that were not originally possible. This infrastructure configuration currently powers the entire company's tech stack.
Countless LAMP configurations
LAMP (Linux, Apache, MySQL, PHP/Perl/Python) is a typical stack used for handling HTTP web requests with database and back-end software capabilities.
Working with a vast variety of systems and environments involves a broad range of LAMP configurations. I've set up LAMP stacks ranging from serving static and dynamic content to full web application layers. With each configuration, I've tweaked and optimized the web server, the software layer and the server itself with respect to that configuration's specific requirements. Each optimization either speeds up the request or reduces overhead, enabling more requests to be handled by each server.
One recent project is an all-in-one, online, privately used SaaS solution. Using Python, PHP, Apache and MySQL (as well as other open-source libraries such as Tornado), I created a highly interactive system that handles every aspect of a therapy clinic: client management, invoicing, receipting, scheduling, emailing, graphing, and user management.
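To give a flavour of the application layer, here is a minimal Tornado sketch in the style described above. The route, port and payload fields are hypothetical examples, not the actual product API.

```python
# Minimal Tornado sketch of a clinic-style API endpoint.
# The route and payload fields are hypothetical, not the real product API.
import json

import tornado.ioloop
import tornado.web


class AppointmentsHandler(tornado.web.RequestHandler):
    def get(self):
        # In the real system this would query MySQL; here we return a stub list.
        self.write({"appointments": [{"client": "Jane Doe", "time": "10:00"}]})

    def post(self):
        # Create an appointment from the JSON body and echo it back.
        appointment = json.loads(self.request.body)
        self.set_status(201)
        self.write({"created": appointment})


def make_app():
    return tornado.web.Application([
        (r"/api/appointments", AppointmentsHandler),
    ])


if __name__ == "__main__":
    make_app().listen(8888)
    tornado.ioloop.IOLoop.current().start()
```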
Big data? Real time? No problem
What happens when you have gigabytes of incoming data per second that require aggregation in real time? How do you design a system not only for the trivial case, but for the general case of big data aggregation where the amount of data flowing into the system is non-uniform? How do you design it to tolerate errors, so that the system can still guarantee accuracy regardless of the problems it faces?
I developed, configured and deployed a system that solves this very problem by compressing, distributing and replicating the incoming streamed data across a multi-node configuration. Using the MapReduce paradigm, I wrote a system that continuously deploys jobs into the cluster, aggregating the newly streamed data according to a universal configuration. This configuration can be modified in real time, allowing various aggregation schemes on the fly. The aggregated data is then placed in various buckets available for any system to utilize.
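As a rough illustration of the config-driven aggregation, here is a simplified, single-process sketch in plain Python rather than the actual cluster code; the field names and the aggregation config are made up for the example.

```python
# Simplified sketch of config-driven map/reduce aggregation. Field names and the
# aggregation config are hypothetical; the real system runs these phases as
# distributed jobs over replicated, compressed stream data.
from collections import defaultdict

# "Universal configuration": which keys to group by and which field to sum.
# In production this would be reloadable at runtime to change schemes on the fly.
AGG_CONFIG = {"group_by": ("campaign", "country"), "sum_field": "impressions"}


def map_phase(records, config):
    """Emit (key, value) pairs according to the aggregation config."""
    for record in records:
        key = tuple(record[field] for field in config["group_by"])
        yield key, record[config["sum_field"]]


def reduce_phase(pairs):
    """Sum values per key, producing one 'bucket' per group."""
    buckets = defaultdict(int)
    for key, value in pairs:
        buckets[key] += value
    return dict(buckets)


if __name__ == "__main__":
    stream = [
        {"campaign": "A", "country": "CA", "impressions": 120},
        {"campaign": "A", "country": "CA", "impressions": 80},
        {"campaign": "B", "country": "US", "impressions": 50},
    ]
    print(reduce_phase(map_phase(stream, AGG_CONFIG)))
    # {('A', 'CA'): 200, ('B', 'US'): 50}
```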
This system was split up into a series of streamers, eaters, compressors and queues. Each individual system had to be designed taking into consideration where it would be run, what resources would be available, and what underlying systems could fail. Each individual use-case scenario had to be accounted for.
The data stored in the multi-node configuration is also available for any custom jobs. This enables others to use this system, allowing them to conduct their own specific analysis on the incoming data.
This system is horizontally scalable on the fly. The more nodes you add to the configuration, the more space and resources are available for either MapReduce or the distributed, redundant file system.
To date, this system is responsible for summarizing petabytes of incoming data daily. The summaries, or stats, are then stored in a NoSQL database used by multiple web-based panels to display results to the end consumer in real time, and to optimize future decisions of various systems.
Auto scaling. Easy
Say you have a cluster of servers, all behind a series of load balancers, serving both static and dynamic content. In most cases, traffic flow is not uniform. What happens if there is more traffic than your cluster can handle, or just enough traffic that the average response latency creeps over the maximum threshold?
Typically, you would either upgrade your servers or manually add more. In either case, even with the support of server images, this process is slow and requires a substantial amount of effort. Furthermore, because traffic is usually not uniform, you might not have a need for all that power continuously. Why pay for something that is only utilized 10% of the time?
Using various cloud providers' APIs, I was able to create an auto-scaling system for various configurations. This system would scale not only on demand, but also based on trends and patterns, ensuring availability of service while minimizing cost overhead.
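The core decision loop looked roughly like the following sketch. The `provider` object stands in for a hypothetical cloud API client, and the thresholds are illustrative, not the values used in production.

```python
# Sketch of an auto-scaling decision loop. The `provider` object stands in for a
# cloud provider API client (hypothetical interface); thresholds are illustrative.
import time


class ScalingPolicy:
    def __init__(self, provider, min_nodes=2, max_nodes=20,
                 latency_high_ms=250, latency_low_ms=80):
        self.provider = provider
        self.min_nodes = min_nodes
        self.max_nodes = max_nodes
        self.latency_high_ms = latency_high_ms
        self.latency_low_ms = latency_low_ms

    def decide(self, avg_latency_ms, current_nodes):
        """Return the desired node count for the current latency reading."""
        if avg_latency_ms > self.latency_high_ms and current_nodes < self.max_nodes:
            return current_nodes + 1          # scale out
        if avg_latency_ms < self.latency_low_ms and current_nodes > self.min_nodes:
            return current_nodes - 1          # scale in, stop paying for idle power
        return current_nodes                  # hold steady

    def run_forever(self, interval_s=60):
        while True:
            nodes = self.provider.list_nodes()
            desired = self.decide(self.provider.average_latency_ms(), len(nodes))
            if desired > len(nodes):
                self.provider.launch_from_template("web-tier")
            elif desired < len(nodes):
                self.provider.terminate(nodes[-1])
            time.sleep(interval_s)
```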
The scaling required working with dynamic templates, where the deployment scenario would change depending on user input. A huge part of this was the integration of version control, allowing users to push features, versions and patches using GitHub. Scaling (primarily horizontal) also required constant awareness of the other nodes within a deployment, as well as dynamic firewall rulesets. All of this and more made this project very interesting and fun.
Blocking attacks without the source
What happens when you're hosting a compiled web service and start receiving network attacks targeting a specific flaw in the network layer? You essentially have a denial of service attack. Around the age of 12, this specific issue arose while I was hosting game servers for clients. These individual servers started receiving attacks that rendered the service unusable.
I have created two systems that help solve this problem. One monitors incoming traffic for a short, designated time to detect and block the attack; the other prevents the attack before it even occurs.
Using the power of Linux and iptables, as well as external modules such as length and limit, I was able to determine a pattern during the attack, which allowed me to distinguish an attacker from a non-malicious client. I put together a series of iptables chains, dropping selected packets that matched the pattern.
But it didn't end there. I wanted to identify the attacking IP addresses. I could have used the logging module available with iptables, but that might slow things down while waiting on I/O, depending on the magnitude of the attack. Instead, I created a system that monitors incoming traffic and identifies the pattern. Once it is found, details are recorded and the address is firewalled.
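A stripped-down version of that monitor might look like the sketch below, using Scapy for packet capture. Scapy, the UDP port and the tiny-packet pattern are my illustrative assumptions here, not necessarily the original implementation.

```python
# Sketch of the monitor-and-block idea: sniff traffic briefly, flag sources whose
# packets match an attack pattern, and firewall them. Scapy, the UDP port and the
# length/rate thresholds are illustrative assumptions.
import subprocess
from collections import Counter

from scapy.all import IP, UDP, sniff

SUSPECT_PAYLOAD_LEN = 1        # attack packets were tiny and extremely frequent
RATE_THRESHOLD = 200           # packets from one source during the capture window
hits = Counter()


def inspect(pkt):
    # Count tiny UDP packets per source address.
    if IP in pkt and UDP in pkt and len(pkt[UDP].payload) <= SUSPECT_PAYLOAD_LEN:
        hits[pkt[IP].src] += 1


def block(addr):
    # Append a DROP rule for the offending source address.
    subprocess.check_call(["iptables", "-A", "INPUT", "-s", addr, "-j", "DROP"])


if __name__ == "__main__":
    # Capture for a short, designated window (10 seconds), then act on what we saw.
    sniff(filter="udp and dst port 27015", prn=inspect, store=False, timeout=10)
    for addr, count in hits.items():
        if count > RATE_THRESHOLD:
            print("blocking", addr, "after", count, "suspect packets")
            block(addr)
```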
Load balancing the load balancer
What happens when your load balancer becomes bogged down by the volume of incoming requests? Whether it's due to an increasing number of queries per second (QPS), the number of active connections built up, or even the need for a more complex firewall rule set, what do you do? Simple: load balance it. But how do you balance it so that load is spread evenly and latency is not compromised? Placing another balancer in front of the pre-existing balancers not only creates a single point of failure (SPOF), but also increases latency significantly.
Using the power of DNS, I configured each frontend DNS name to map to a set of records, where each record pointed to an individual load balancer. The set of records returned was also determined by geolocation, which was used to optimize the latency between the load balancers and the originating source. By partially shifting the balancing responsibility to the client, this ensured latency was not compromised. I essentially created a load-balancing cluster of individual balancers using the round-robin paradigm. Various kernel optimizations had to be made to achieve the best performance, depending on the traffic.
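Conceptually, the DNS side boils down to something like this sketch. The regions, addresses and rotation scheme are made up for illustration; the production setup relied on the DNS server's own geo and round-robin features rather than custom application code.

```python
# Conceptual sketch of geo-aware, round-robin selection of load balancer records.
# Regions, IPs and the rotation scheme are illustrative only.
from itertools import count

# Load balancers grouped by the region closest to them (hypothetical addresses).
BALANCERS = {
    "north-america": ["198.51.100.10", "198.51.100.11"],
    "europe": ["203.0.113.20", "203.0.113.21"],
}
_rotation = count()


def records_for(client_region):
    """Return the A records for a client's region, rotated round-robin so that
    successive responses lead with a different balancer."""
    pool = BALANCERS.get(client_region, BALANCERS["north-america"])
    offset = next(_rotation) % len(pool)
    return pool[offset:] + pool[:offset]


if __name__ == "__main__":
    print(records_for("europe"))         # ['203.0.113.20', '203.0.113.21']
    print(records_for("europe"))         # ['203.0.113.21', '203.0.113.20']
```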
Currently, I am working on optimizing this system further, collecting real-time stats from each individual load balancer (average latency, error counts, system load) and feeding that information to the DNS server, which will use it to formulate its responses.