The Boston University Machine Intelligence Community (MIC) is an organization focused on providing opportunities for undergraduate and graduate students to learn about machine intelligence in a community environment. As a brief history, MIC was founded as a research paper discussion group at MIT in 2016, inspired by the AlphaGo versus Lee Sedol match, and was brought to BU in April 2017.
We are a student-led research group sponsored by Boston University's Rafik B. Hariri Institute for Computing and Computational Science and Engineering, BU Software Application and Innovation Lab (SAIL), and BU Spark.
Conditioning Deep Generative Raw Audio Models for Structured Automatic Music
Rachel Manzelli, 2017
Existing automatic music generation approaches that feature deep learning can be broadly classified into two types: raw audio models and symbolic models. Symbolic models, which train and generate at the note level, are currently the more prevalent approach; these models can capture long-term dependencies of melodic structure, but fail to grasp the nuances and richness of raw audio generations. Raw audio models, such as DeepMind's WaveNet, train directly on sampled audio waveforms, allowing them to produce realistic-sounding, albeit unstructured, music. We propose an automatic music generation methodology that combines both approaches to create structured, realistic-sounding compositions. We use a Long Short-Term Memory (LSTM) network to learn the melodic structure of different styles of music, and then use the unique symbolic generations from this model as a conditioning input to a WaveNet-based raw audio generator, creating a model for automatic, novel music.
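The key interface between the two models is the conditioning step. A minimal sketch (hypothetical names, rates, and shapes chosen for illustration; this is not the authors' actual implementation) of how a note-level symbolic melody can be aligned with a raw audio model's sample rate and used as a WaveNet-style local conditioning signal:

```python
import numpy as np

SAMPLE_RATE = 16000  # raw audio samples per second
NOTE_RATE = 4        # symbolic notes per second (illustrative choice)

def upsample_melody(notes, sample_rate=SAMPLE_RATE, note_rate=NOTE_RATE):
    """Repeat each symbolic note so it aligns one-to-one with audio samples."""
    samples_per_note = sample_rate // note_rate
    return np.repeat(np.asarray(notes), samples_per_note)

# Hypothetical note-level output from an LSTM melody model (MIDI pitches).
melody = [60, 62, 64, 65]               # one second of melody
conditioning = upsample_melody(melody)  # shape: (16000,)

# A WaveNet-style generator would then predict each audio sample x_t from
# past samples and the aligned conditioning value c_t:  p(x_t | x_<t, c_t).
```

The upsampling makes the note sequence available at every audio timestep, which is what lets the raw audio model follow the melodic structure the symbolic model provides.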
Hypergraph Distributed Stochastic Gradient Descent
Justin Chen, 2018
To enable training of deep models on large datasets, learning algorithms must be distributed. Although state-of-the-art systems can train on ImageNet in minutes, we push distributed gradient-based optimization to the extremes. We introduce a novel distributed hybrid, globally-asynchronous, locally-synchronous non-convex optimization algorithm for large-scale training. Nodes in the computation, which we refer to as peers, stochastically and independently organize into cliques that each compute a synchronous gradient step across different points in parameter space. The resultant communication topology forms a hypergraph over all peers. We call our algorithm Hypergraph Distributed Stochastic Gradient Descent. Additionally, we developed a minimalist distributed gradient-based learning framework, Simultaneous Multi-Party Learning (SMPL), pronounced [sim-puh l].
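The clique-based update described above can be sketched in a few lines. This is a toy, single-process simulation under stated assumptions (a simple quadratic objective, equal-probability random cliques, a made-up `hdsgd_round` helper), not the SMPL framework's actual API:

```python
import numpy as np

rng = np.random.default_rng(0)

def grad(w):
    # Toy objective f(w) = ||w||^2, whose gradient is 2w; stands in for a
    # stochastic gradient of a real loss.
    return 2.0 * w

def hdsgd_round(peers, clique_size, lr=0.1):
    """One round: peers stochastically organize into cliques, and each clique
    takes one synchronous step using the average of its members' gradients,
    computed at each member's (generally different) point in parameter space."""
    order = rng.permutation(len(peers))
    for start in range(0, len(order), clique_size):
        clique = order[start:start + clique_size]
        g = np.mean([grad(peers[i]) for i in clique], axis=0)
        for i in clique:
            peers[i] = peers[i] - lr * g

# Ten peers, each starting at an independent point in a 3-D parameter space.
peers = [rng.standard_normal(3) for _ in range(10)]
for _ in range(300):
    hdsgd_round(peers, clique_size=3)
# After many rounds of reshuffled cliques, all peers approach the minimizer w = 0.
```

Because clique membership is re-randomized every round, each clique over time corresponds to a hyperedge connecting its members, so the communication pattern across rounds forms a hypergraph over all peers, as the abstract describes.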
My research interests are in the broad area of Artificial Intelligence with a focus on Adaptive Machine Learning, Learning for Vision and Language Understanding, and Deep Learning. I am also part of the Artificial Intelligence Research initiative at BU, formed in the fall of 2017 with the goal of promoting AI research and education in the BU community.