Boston University Machine Intelligence Community (MIC) is an organization focused on providing opportunities for undergraduate and graduate students to learn about machine intelligence in a community environment. MIC was founded as a research paper discussion group at MIT in 2016, inspired by the AlphaGo versus Lee Sedol match, and was brought to BU in April 2017.

We are a student-led research group sponsored by Boston University's Rafik B. Hariri Institute for Computing and Computational Science and Engineering, BU Software Application and Innovation Lab (SAIL), and BU Spark.


  • Society and Machine Intelligence Discussion

    A bi-weekly discussion of the societal and economic impacts of machine intelligence. The series aims to build public awareness of these impacts and to convey the moral responsibility we share, as machine intelligence engineers and research scientists, to ensure that the technology we develop benefits everyone and leads to a more hopeful future.

    Location: Hariri Institute Seminar Room
    Date/Time: Tuesdays, 6:00 PM (bi-weekly starting 1.30.2018)

  • TensorFlow Workshop Spring 2018

    This workshop covers the fundamentals of Google's TensorFlow machine learning framework. The series builds from basic concepts up to an actual project implementation.

    Location: Hariri Institute Seminar Room
    Date/Time: Tuesdays, 6:30 PM (bi-weekly starting 2.21.2018)

  • Fundamentals of Deep Learning Fall 2017

    Seminar series on the fundamentals of deep learning theory and application. Topics: Machine Intelligence, Gradient-Based Learning, Introduction to Neural Networks, Regularization, Compositional Data, Transfer Learning, Sequential Data, Deep Reinforcement Learning, Unsupervised Learning, Neural Style Transfer

  • Research Paper Discussions

    Weekly student-led presentations and discussions covering recent research papers. Check out our content on GitHub.







Conditioning Deep Generative Raw Audio Models for Structured Automatic Music

Rachel Manzelli, 2017

Existing automatic music generation approaches that feature deep learning can be broadly classified into two types: raw audio models and symbolic models. Symbolic models, which train and generate at the note level, are currently the more prevalent approach; these models can capture long-term dependencies of melodic structure, but fail to grasp the nuances and richness of raw audio generations. Raw audio models, such as DeepMind's WaveNet, train directly on sampled audio waveforms, allowing them to produce realistic-sounding, albeit unstructured, music. We propose an automatic music generation methodology combining both of these approaches to create structured, realistic-sounding compositions. We consider a Long Short-Term Memory network to learn the melodic structure of different styles of music, and then use the unique symbolic generations from this model as a conditioning input to a WaveNet-based raw audio generator, creating a model for automatic, novel music.
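The key mechanical step in the pipeline above is aligning the symbolic model's note-level output with the raw audio generator's sample-level input. The following is a minimal sketch of that conditioning step, not the paper's actual code; all function names, rates, and the stand-in random melody are hypothetical:

```python
import numpy as np

SAMPLE_RATE = 16000      # audio samples per second (assumed, for illustration)
NOTES_PER_SECOND = 4     # symbolic model emits 4 notes per second (assumed)

def generate_symbolic_melody(length):
    """Stand-in for the LSTM's symbolic output: a sequence of MIDI pitches."""
    rng = np.random.default_rng(0)
    return rng.integers(60, 72, size=length)  # one octave starting at middle C

def upsample_to_audio_rate(notes):
    """Hold each note for its duration so the conditioning signal is
    aligned sample-by-sample with the raw audio waveform."""
    samples_per_note = SAMPLE_RATE // NOTES_PER_SECOND
    return np.repeat(notes, samples_per_note)

melody = generate_symbolic_melody(8)           # 2 seconds of notes
conditioning = upsample_to_audio_rate(melody)  # one value per audio sample

# A WaveNet-style generator would consume (previous samples, conditioning)
# pairs at each timestep; here we only verify the alignment.
print(conditioning.shape)  # (32000,) -> 2 seconds at 16 kHz
```

The upsampled sequence has exactly one conditioning value per waveform sample, which is what lets the raw audio model follow the melodic structure learned by the symbolic model.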

Hypergraph Distributed Stochastic Gradient Descent

Justin Chen, 2018

To enable training of deep models on large datasets, learning algorithms must be distributed. Although current state-of-the-art systems can train on ImageNet in minutes, we push distributed gradient-based optimization to the extremes. We introduce a novel distributed hybrid, globally-asynchronous, locally-synchronous non-convex optimization algorithm for large-scale training. Nodes in the computation, which we refer to as peers, stochastically and independently organize into cliques that each compute a synchronous gradient step across different points in parameter space. The resultant communication topology forms a hypergraph over all peers. We call our algorithm Hypergraph Distributed Stochastic Gradient Descent. Additionally, we developed a minimalist distributed gradient-based learning framework, Simultaneous Multi-Party Learning (SMPL), pronounced [sim-puh l].
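The peer organization described above can be sketched in a few lines. This is a toy illustration, not the paper's implementation: peers randomly form cliques, each clique takes a synchronous step (here simplified to averaging its members' scalar parameters), and the set of cliques constitutes the hyperedges of a hypergraph over all peers. All names and the one-parameter-per-peer setup are hypothetical:

```python
import random

def form_cliques(peers, clique_size, rng):
    """Randomly partition peers into cliques of (up to) clique_size.
    Each clique is one hyperedge of the communication hypergraph."""
    shuffled = peers[:]
    rng.shuffle(shuffled)
    return [shuffled[i:i + clique_size]
            for i in range(0, len(shuffled), clique_size)]

def synchronous_step(params, clique):
    """Within a clique, members synchronize: here, a simple parameter
    average stands in for an averaged gradient step."""
    avg = sum(params[p] for p in clique) / len(clique)
    for p in clique:
        params[p] = avg

rng = random.Random(42)
peers = list(range(8))
params = {p: float(p) for p in peers}  # toy 1-D parameter per peer

# One round: cliques form stochastically and independently; each clique
# steps synchronously, while different cliques proceed asynchronously
# with respect to one another.
hyperedges = form_cliques(peers, clique_size=4, rng=rng)
for clique in hyperedges:
    synchronous_step(params, clique)
```

Over repeated rounds with fresh random cliques, information mixes across the whole hypergraph even though no global synchronization ever occurs.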


Kate Saenko

My research interests are in the broad area of Artificial Intelligence with a focus on Adaptive Machine Learning, Learning for Vision and Language Understanding, and Deep Learning. I am also part of the Artificial Intelligence Research initiative at BU, formed in the fall of 2017 with the goal of promoting AI research and education in the BU community.

Andrei Lapets

Advances in machine learning will have wide-ranging effects on the computational sciences and society at large, and in particular with regard to privacy and cybersecurity. Building interest and awareness on this topic is essential for both productive and responsible engagement with this re-emerging area of research.