I joined the MAG group in Fall 2013 to do research in, broadly, support for concurrent and parallel programming. My work is primarily in runtime support for concurrent data structures in non-garbage-collected languages like C/C++. The intention is to facilitate the use of advanced, multicore-friendly data structures and algorithms in performance-oriented applications, where they are hardest to use.
Prior to joining MIT, I worked as a software engineer for Cilk Arts and Intel, where I contributed to the Cilk programming language, runtime, and tools; and then at VeloBit, where we developed kernel-level caching software for Windows and Linux. Although that makes me something of a systems guy working in a theory group, part of the impetus for coming back to school was that I wanted a sound and rigorous theoretical basis for the systems we build.
On a more personal note: I like playing sandbox-style computer games like Minecraft and Kerbal Space Program. They're good creative outlets, and I like to build castles in the former, and minimal landers in the latter. I like to sing... a lot. The theory group sometimes goes out for karaoke, and I sometimes sing with my church choir. I also think beards are neat, and if you can grow a good one, it is your responsibility to the rest of humanity to do so.
For decades, high-performance code has been written in C and C++ because they're "close to the machine": their types and operations translate directly into machine types and operations. Moreover, they inject no runtime safety checks that would create performance or memory overhead, and they provide no garbage collector that would introduce unpredictable overheads and latency. This design philosophy worked well in the era of single-core computers, when programmers were concerned with squeezing out every last cycle.
In the multi-core era, however, scalability has become an integral factor in performance. Higher-level languages are far better suited to parallelism and concurrency because they provide abstractions for load balancing of tasks, memory reclamation for concurrent data structures, and hybrid transactions (transactions that supplement hardware primitives with software that guarantees forward progress). Scalability, therefore, tends to come at the cost of highly optimized per-task code.
This tradeoff is not inherent. My work is primarily in making scalability-oriented abstractions available to C/C++ codebases. Forkscan provides automatic concurrent memory reclamation for low-level programming languages. It is not general-purpose garbage collection; rather, it is specialized for concurrent data structures, making them available to C/C++ without the overhead and complexity of hazard pointers, reference counting, or epoch-based tracking.
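To make the reclamation problem concrete, here is a minimal sketch in C (this is not Forkscan's actual API) of a Treiber-style lock-free stack. A popped node cannot be freed immediately, because a concurrent `pop` may have already read a pointer to it and be about to dereference its `next` field; the `retire` list below is a hypothetical stand-in for the deferred reclamation that schemes like Forkscan, hazard pointers, or epochs automate.

```c
#include <stdatomic.h>
#include <stdlib.h>

/* Node of a Treiber-style lock-free stack. */
struct node {
    int value;
    struct node *next;
};

static _Atomic(struct node *) top = NULL;

/* Hypothetical retire list: popped nodes are deferred, not freed,
 * because a concurrent reader may still hold a pointer to them.
 * Deciding when retired nodes are safe to free is exactly the job
 * a reclamation scheme automates. */
static struct node *retired = NULL;
static int retired_count = 0;

static void retire(struct node *n) {
    n->next = retired;
    retired = n;
    retired_count++;
}

void push(int v) {
    struct node *n = malloc(sizeof *n);
    n->value = v;
    n->next = atomic_load(&top);
    /* On CAS failure, n->next is refreshed with the current top; retry. */
    while (!atomic_compare_exchange_weak(&top, &n->next, n))
        ;
}

int pop(int *out) {
    struct node *n = atomic_load(&top);
    /* On CAS failure, n is refreshed with the current top; retry. */
    while (n && !atomic_compare_exchange_weak(&top, &n, n->next))
        ;
    if (!n) return 0;
    *out = n->value;
    /* free(n) here would be a use-after-free hazard under concurrency:
     * another pop may have loaded n just before our CAS and will still
     * read n->next. Defer reclamation instead. */
    retire(n);
    return 1;
}
```

The sketch sidesteps the question of when the retired nodes can actually be freed; that is the hard part, and the reason C/C++ programmers otherwise reach for hazard pointers, reference counts, or epochs.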
- 6.820: Foundations of Program Analysis: Fall 2013
- 6.840: Theory of Computation: Fall 2013
- 6.836: Multicore Programming: Spring 2014
- 6.878: Computational Biology: Fall 2014
- 6.831: User Interface Design: Spring 2015
- 6.S898: Advanced Multicore Performance Engineering: Spring 2017
- 6.885: Multicore Programming in DEF: Fall 2017