I joined the MAG group in Fall 2013 to do research, broadly, in support for concurrent/parallel programming. My work is primarily in runtime support for concurrent data structures in non-garbage-collected languages like C/C++. The goal is to make advanced, multicore-friendly data structures and algorithms practical in performance-oriented applications, where they are hardest to use.
Prior to joining MIT, I worked as a software engineer for Cilk Arts and Intel, where I contributed to the Cilk programming language, runtime, and tools; and then at VeloBit, where we developed kernel-level caching software for Windows and Linux. Although that makes me something of a systems guy working for a theory group, part of the impetus for coming back to school was that I valued a sound and rigorous theoretical basis for the systems we build.
On a more personal note: I like playing sandbox-style computer games like Minecraft and Kerbal Space Program. They're good creative outlets, and I like to build castles in the former, and minimal landers in the latter. I like to sing... a lot. The theory group sometimes goes out for karaoke, and I sometimes sing with my church choir. I also think beards are neat, and if you can grow a good one, it is your responsibility to the rest of humanity to do so.
The free lunch (for programmers) of waiting for faster computers to run bigger applications is over. It's been over for years. Applications are now being rewritten to take advantage of many-core architectures. To facilitate this transition, researchers have developed low-contention data structures that scale with core count. These "concurrent data structures" are designed to hide the complexities of inter-thread communication so that programmers can operate on them naively, as they would any serial data structure.
Concurrent data structures are hard to clean up after, though. Reclaiming memory is easy in garbage-collected languages like Java, but languages like C/C++ have resisted GC because of its overheads, in both total CPU time and application latency. Unfortunately, without GC, concurrent data structures are hard to use: memory reclamation techniques tend to be expensive, complex, unmaintainable, or all of the above. The properties that made these data structures desirable in the first place tend to disappear without GC.
Thus, concurrent data structures have given C/C++ programmers a new reason to want GC. We're developing a garbage collector intended for use with concurrent data structures in C/C++, with minimal visible overhead. This brings these languages to the fore for highly concurrent applications that need performance and scalability.
More broadly, we're developing supporting infrastructure that lets low-level programmers take advantage of many cores without getting bogged down in inter-thread communication and clean-up, and without paying a performance, latency, or scalability penalty for the help.
- 6.820: Foundations of Program Analysis: Fall 2013
- 6.840: Theory of Computation: Fall 2013
- 6.836: Multicore Programming: Spring 2014
- 6.878: Computational Biology: Fall 2014
- 6.831: User Interface Design: Spring 2015