I am an accomplished Lead Software Engineer with a demonstrated history of working in the computer games industry. I am skilled in many programming languages, with a career that has predominantly focused on server-side technologies. I currently work as a Lead Software Engineer at The Multiplayer Group, where I am working with my team to build a matchmaking and game server provisioning solution using Go microservices in Kubernetes.
I have a PhD in Computer Science from the University of Nottingham. My thesis, 'Congestion Control Framework For Delay-Tolerant Communication', focused on providing a congestion control framework for use in delay- and disconnection-prone networks and addressed both single-copy and replication-based message dissemination.
Prior to researching at Nottingham I studied at the University of Leicester, where I obtained a BSc (Hons) degree in Computer Science. My dissertation focused on methods for establishing and maintaining a minimum-cost coverage set in mobile ad-hoc networks with changing topologies.
The Multiplayer Group offer Co-Dev, Full-Dev and Analytics services. MPG specialise in creating the highest standard of mind-blowing multiplayer experiences for their partners.
As a Lead Software Engineer I am charged with technical decision-making, sprint planning, holiday approvals, performance reviews and generally championing my team. I am a code owner on the Google for Games Open Match project, which I contribute to by fixing bugs, reviewing pull requests, aiding in the design of new features and responding to questions in the project's Slack workspace.
The matchmaking & game server provisioning project comprises of a collection of microservices that are written in Golang and integrate with Open Match, Agones, Knative and Kubernetes.
The services are designed to be platform agnostic and are deployed using Terraform, Terragrunt, Helm and Argo CD.
We use Open Telemetry, Prometheus and Grafana to provide insight into the performance of the services.
We leverage Fluentd, Elasticsearch and Kibana in order to aggregate, search and display the log information output from all of the services that we host.
Lockwood Publishing are the creators of Avakin Life; their aim is to become the biggest social and mobile gaming company around.
At Lockwood, I joined an established team with a substantial amount of code already written. My role is focused on software design and development. I work predominantly with the Go programming language and with multiple data storage technologies (Postgres, DynamoDB, Redis, Elasticsearch, S3, InfluxDB, Cayley). Below I have listed the projects where I have implemented the server-side code.
Kwalee is an expanding, independent, mobile game developer based in Leamington Spa.
My responsibilities at Kwalee can be grouped into four main areas: Game Server Framework Development, Game-Specific Development, Infrastructure Development and Team Management; below is a description of each:
Historically Kwalee used a monolithic .NET (VB and C#) codebase and a MySQL database on their game servers. I was employed to re-engineer this service. The main goals for the new system were horizontal scalability and low latency globally; in addition to these operational goals, I added the following software engineering goals to the specification: the service should be RESTful, modular and customisable for game-specific needs.
This work required the following technologies: Python, Flask, Flask-RESTful, Flask-WTF, Couchbase, Elasticsearch, Celery.
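As an illustration only (the endpoint and names below are hypothetical, not Kwalee code), here is a minimal sketch of the RESTful, modular style this stack encourages, where each game-specific feature can live in its own resource module registered against a shared Flask application:

```python
# Hypothetical example, for illustration of the RESTful, modular style only.
from flask import Flask
from flask_restful import Api, Resource, reqparse

app = Flask(__name__)
api = Api(app)

class PlayerScore(Resource):
    """Example resource: read and update a player's score."""
    scores = {}  # in-memory stand-in for the real datastore

    def get(self, player_id):
        return {"player_id": player_id, "score": self.scores.get(player_id, 0)}

    def put(self, player_id):
        parser = reqparse.RequestParser()
        parser.add_argument("score", type=int, required=True)
        args = parser.parse_args()
        self.scores[player_id] = args["score"]
        return {"player_id": player_id, "score": args["score"]}

api.add_resource(PlayerScore, "/players/<string:player_id>/score")

if __name__ == "__main__":
    app.run()
```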
Each game Kwalee creates has elements unique to it, and the server has to interact with these components in a meaningful way, so it did not make sense to develop them generically. The requirements for these components were gathered from the separate game development teams and focused on providing each team with the service it needed.
This area of my role involved configuring the AWS infrastructure and using technologies such as NGINX, uWSGI and Supervisord. The New Relic service provided monitoring and insight into service performance.
As the lead of the server team I was charged with sprint planning, performance reviewing and interviewing candidates.
Hive was a marketing, packaging and technology company based in Wymeswold, Leicestershire. Hive specialised in increasing frequency of purchase and weight of purchase for fast-moving consumer goods (FMCG) brands via proof-of-purchase marketing campaigns.
My responsibilities at Hive can be grouped into three main areas: Marketing Platform Framework Development, Client-Specific Development and Infrastructure Development; below is a description of each:
One of my main day-to-day tasks was the maintenance and improvement of 'The Hive Platform'. Since joining the company I had been instrumental in modernising the codebase, transitioning from a single monolithic application towards a modular toolchain. This involved substantial refactoring: correcting architectural flaws, removing code smells, lowering coupling and increasing cohesion.
This work required the following technologies: Python, Django, Celery, RabbitMQ, Redis, MySQL, REST, SOAP.
Each client project brought its own set of challenges, ranging from a simple additional signup requirement to not being able to store any personally identifiable data and having to access all of a consumer's details via an API. Typically the Hive marketing platform was configurable enough to facilitate a brand's marketing requirements, but when something different came along it was built for the client to meet their specification.
This work required the same technologies as the Marketing Platform Framework Development.
This development was concerned with taking Hive's existing hosting solution (Dedicated Rackspace servers) and making it more robust, better structured and responsive to demand (Rackspace Cloud Servers). This required the following technologies: NGINX, uWSGI, Supervisor, RSyslog, Redis, MySQL, rpmbuild, CFEngine3, Pushover, Zabbix, Python, Django, Rackspace pyrax and paramiko.
One of the main issues facing Hive when I started working for them was the storage and data management of the codes they were generating, encrypting and then later cross-referencing. I developed an algorithm for Hive that combines information-theoretic and cryptographic techniques to allow unique codes to be produced and redeemed without storing individual codes.
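The details of that algorithm are Hive's, so purely as an illustration of the general idea, here is a minimal sketch of one common way to issue and redeem codes without storing each one: derive a short tag from a serial number with a keyed MAC, so validity can be checked by recomputation rather than by lookup. The key, sizes and encoding below are hypothetical:

```python
# Illustrative sketch only; this is not Hive's algorithm.
import hmac, hashlib, base64

SECRET_KEY = b"example-secret"  # hypothetical key, kept server-side

def generate_code(serial: int) -> str:
    """Produce a printable code embedding a serial (< 2**40) and a truncated MAC."""
    payload = serial.to_bytes(5, "big")
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()[:5]
    return base64.b32encode(payload + tag).decode().rstrip("=")

def validate_code(code: str) -> bool:
    """Check a code by recomputing its MAC; no per-code storage is needed."""
    raw = base64.b32decode(code + "=" * (-len(code) % 8))
    payload, tag = raw[:5], raw[5:]
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()[:5]
    return hmac.compare_digest(tag, expected)
```

A scheme along these lines still needs to record which codes have been redeemed, but not every code ever issued.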
A client request for our code validation API to handle 50 times more traffic than it could process at the time led me to implement a much faster API framework and to re-engineer the code behind the unique code submission process.
As a result of the code algorithm work I carried out, Hive's code generation solution has become much more flexible, allowing it to be distributed. The main benefit of allowing code generation to occur in a decentralised way is that I have been able to provide a code generation library for other systems to generate codes, specifically a DLL for use with C# .NET. This work has required the following technologies: C++, C++/CLI, C#, .NET, as well as Python, Django and REST.
During my time at university, in order to fund my education, I developed websites for small businesses and worked as a freelance developer. During this time I worked with a number of different languages and technologies such as: PHP, Perl, Java, JavaScript, Python, XHTML, CSS, XML, XSLT, MySQL and MS SQL.
2008 - 2011: Ph.D. in Computer Science from the University of Nottingham.
2005 - 2008: BSc Hons in Computer Science from the University of Leicester.
2001 - 2003: BTEC National Diploma in Computing from Leicester College.
2008 - 2011 University of Nottingham
Funding: EPSRC
Supervisors: Dr. Milena Radenkovic and Prof. Uwe Aickelin
Detecting and dealing with congestion in delay tolerant networks is an important and challenging problem. Current DTN forwarding algorithms typically direct traffic towards particular nodes in order to maximise delivery ratios and minimise delays, but as traffic demands increase these nodes may become unusable.
This thesis proposes Café, an adaptive congestion aware framework that reduces traffic entering congesting network regions by using alternative paths and dynamically adjusting sending rates, and CafRep, a replication scheme that considers the level of congestion and the forwarding utility of an encounter when dynamically deciding the number of message copies to forward.
Our framework is a fully distributed, localised, adaptive algorithm that evaluates a contact's next-hop potential by means of a utility comparison of a number of congestion signals, in addition to that contact's forwarding utility, both from a local and regional perspective. We extensively evaluate our work using two different applications and three real connectivity traces showing that, independent of the network interconnectivity and mobility patterns, our framework outperforms a number of major DTN routing protocols.
Our results show that both Café and CafRep consistently outperform the state-of-the-art algorithms, in the face of increasing traffic demands. Additionally, with fewer replicated messages, our framework increases success ratio and the number of delivered packets, and reduces the message delay and the number of dropped packets, while keeping node buffer availability high and congesting at a substantially lower rate, demonstrating our framework's more efficient use of network resources.
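Purely as an illustration of the kind of per-contact decision described above (this is not the Café/CafRep implementation, and the signals and weights are placeholders), a sketch of weighing a neighbour's forwarding utility against simple congestion signals:

```python
# Illustrative sketch only: a per-contact forwarding decision that combines a
# node's forwarding utility with simple congestion signals; the real framework
# uses a richer set of local and regional signals.
from dataclasses import dataclass

@dataclass
class ContactState:
    delivery_utility: float  # how good this node is at reaching the destination (0..1)
    buffer_free: float       # fraction of buffer space still available (0..1)
    drop_rate: float         # recent fraction of messages dropped (0..1)

def combined_utility(c: ContactState, w_fwd=0.5, w_buf=0.3, w_drop=0.2) -> float:
    """Weighted combination of forwarding utility and congestion signals."""
    return w_fwd * c.delivery_utility + w_buf * c.buffer_free + w_drop * (1.0 - c.drop_rate)

def should_forward(me: ContactState, neighbour: ContactState) -> bool:
    """Forward a copy only if the encountered node looks strictly better."""
    return combined_utility(neighbour) > combined_utility(me)
```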
I have published work at a number of premium conferences and presented my work both at conferences and university group seminars, which has helped me to develop the ability to communicate in a clear manner. During the course of my Ph.D. I attended a number of training courses including Planning Research Projects, Statistical Analysis and Data Sampling, Statistical Analysis Using R, Introduction to Teaching, Marking and Assessing, and Demonstrating in Computer Science Practicals. Since completing my teacher training I have marked exam papers, demonstrated in labs and given a tutorial lecture. In addition to this I have been a paid supervisor of a Master's student, overseen by my supervisor. The Master's student's dissertation focused on self-organised security in mobile ad-hoc networks, culminating in an award for best Master's project.
2005 - 2008 University of Leicester
Tutor: Professor Reiko Heckel
Supervisor: Professor Thomas Erlebach
My BSc dissertation was a continuation of the Nuffield Foundation funded research work I undertook during the summer of 2007. I developed an algorithm that provided a solution to the wireless ad-hoc network routing backbone problem (a minimum-cost coverage set of a weighted graph with changing topology). It centred on the observation that, by weighting each edge in the network graph as the sum of the costs of the two nodes it connects, you can compute a good minimum-cost coverage set approximation by means of a distributed minimum spanning tree algorithm. My implementation utilised threaded programming techniques, illustrated the solution as a graph in a GUI and produced a trace file for statistical analysis.
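For illustration, a minimal centralised sketch of that edge-weighting idea (the dissertation implementation was distributed, threaded and visualised in a GUI; the function and data layout below are my own simplification):

```python
# Illustrative, centralised sketch of the edge-weighting idea described above.
import heapq

def backbone(node_cost: dict, edges: list) -> set:
    """Approximate a minimum-cost coverage set (routing backbone).

    node_cost: {node: cost}; edges: [(u, v)] undirected links.
    Each edge is weighted as the sum of the costs of the nodes it connects,
    a minimum spanning tree is grown (Prim's algorithm), and the internal
    (non-leaf) nodes of that tree form the backbone approximation.
    """
    adj = {n: [] for n in node_cost}
    for u, v in edges:
        w = node_cost[u] + node_cost[v]
        adj[u].append((w, v))
        adj[v].append((w, u))

    start = next(iter(node_cost))
    visited, degree = {start}, {n: 0 for n in node_cost}
    heap = [(w, start, v) for w, v in adj[start]]
    heapq.heapify(heap)
    while heap and len(visited) < len(node_cost):
        w, u, v = heapq.heappop(heap)
        if v in visited:
            continue
        visited.add(v)
        degree[u] += 1
        degree[v] += 1
        for w2, nxt in adj[v]:
            if nxt not in visited:
                heapq.heappush(heap, (w2, v, nxt))

    # Internal tree nodes (degree > 1) cover every leaf's neighbourhood.
    return {n for n, d in degree.items() if d > 1}
```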
Functional Programming, Logic Programming, Object-Oriented Programming, Software Engineering, Internet Programming, Theory of Computation, Discrete Mathematics, System Modelling and Design, Compression Methods, Multimedia, Cryptography and System Security.
Summer 2007 University of Leicester
Funding: Nuffield Science Bursary
Supervisor: Professor Thomas Erlebach
During the two-month bursary I investigated algorithms for routing backbone construction in wireless ad-hoc networks. After familiarising myself with the relevant literature I implemented two variants of the Wang-Wang-Li (WWL) algorithm and a centralised global greedy approach. I also implemented a graphical user interface for visualising the networks and the computed routing structures.
This page lists my open source projects and contributions.
I am a code owner on the Google for Games Open Match project, which I contribute to by fixing bugs, reviewing pull requests, aiding in the design of new features and responding to questions in the project's Slack workspace.
This project is a prototype and I hope, if it yields good results, that some of the lessons learned will be adopted by the Open Match project. The original objective of om-stream was to keep the functionality of Open Match but to move away from having singletons like the synchroniser, director and evaluator. The synchroniser and director are replaced by a streaming database, and the evaluator is replaced by an atomic database action; these changes should increase scope for capacity and resilience.
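As a hypothetical illustration of the atomic-action idea (not the om-stream code; Redis and the key scheme here are only stand-ins for a store with an atomic multi-key operation):

```python
# Hypothetical illustration: committing a match proposal only if every ticket
# it contains can be claimed in a single atomic step, so two overlapping
# proposals can never both win.
import redis

r = redis.Redis()

def try_commit(match_id: str, ticket_ids: list) -> bool:
    """Atomically claim all tickets for a proposal. MSETNX sets the keys only
    if none of them already exist, so conflicts resolve without a central
    evaluator."""
    claims = {f"claim:{t}": match_id for t in ticket_ids}
    return bool(r.msetnx(claims))
```

Because the claim either succeeds for every ticket or for none, overlapping proposals cannot both be committed, which removes the need for an evaluator singleton.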
I wrote this code because I needed a pure Python bitmap index that could serialise to a string. In order to compress the string representation of the binary data I chose run-length encoding, as it was a good fit for the large, sparse bitmaps I needed to store.
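For illustration, a minimal sketch of the run-length idea (not the published implementation; it uses a simple comma-separated string where the first run always counts leading zeros):

```python
# Illustrative sketch: a bitmap stored as alternating run lengths of 0s and
# 1s, which compresses well when the bitmap is large and sparse and
# serialises naturally to a string.
def encode(bits: list) -> str:
    """Run-length encode a bit list, e.g. [0, 0, 0, 1, 1, 0] -> "3,2,1"."""
    runs, current, count = [], 0, 0
    for b in bits:
        if b == current:
            count += 1
        else:
            runs.append(count)
            current, count = b, 1
    runs.append(count)
    return ",".join(map(str, runs))

def decode(s: str) -> list:
    """Inverse of encode: expand alternating runs back into a bit list."""
    bits, current = [], 0
    for run in s.split(","):
        bits.extend([current] * int(run))
        current ^= 1
    return bits
```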