Evening Session

Max Summer School in Geidai 2023

Anyone can watch this program.

July 31, 18:00

Introduction to Mubu

Presenter: Suguru Goto

Overview

This presentation introduces Mubu in conjunction with machine learning. Mubu is a Max library developed at IRCAM in France. It is primarily a toolbox for multimodal analysis of sounds and gestures, interactive sound synthesis, and machine learning. It also includes real-time and batch data processing (sound descriptors, motion features, filtering), granular synthesis, concatenative synthesis, additive synthesis, data visualization, static and temporal recognition, regression algorithms, and more. Mubu can be downloaded via the Package Manager within Max.
https://ircam-ismm.github.io/max-msp/mubu.html
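Mubu itself is a library of graphical Max objects, so its workflow is patched rather than typed. Purely as a conceptual illustration of the descriptor-plus-classifier pattern such a toolbox supports, here is a minimal Python sketch (librosa and scikit-learn assumed; this is not Mubu's API, and the file names are hypothetical):

```python
# Conceptual sketch, not Mubu's API: extract a sound descriptor
# per file and classify new sounds with k-nearest neighbors.
import librosa
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def descriptor(path):
    """Mean MFCC vector as a simple per-file sound descriptor."""
    y, sr = librosa.load(path, sr=None, mono=True)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)

# Hypothetical training files labeled by sound class.
train = {"kick.wav": "kick", "snare.wav": "snare", "hat.wav": "hat"}
X = np.array([descriptor(f) for f in train])
labels = list(train.values())

knn = KNeighborsClassifier(n_neighbors=1).fit(X, labels)
print(knn.predict([descriptor("unknown.wav")]))  # e.g. ['snare']
```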

Biography

Suguru Goto is a composer/performer, inventor, and multimedia artist, considered one of the most innovative voices of a new generation of Japanese artists. He is deeply engaged in technical experimentation in the artistic field and in extending the possibilities of the human-machine relationship. In his works, new technologies come together in interactive installations and experimental performances; he invented the so-called virtual musical instruments, interfaces for communication between human movement and the computer, through which sound and video image are controlled in real time. Lately, he has been creating robots that perform on acoustic instruments, and he is in the process of constructing a robot orchestra.
He has been internationally active and has received numerous prizes and fellowships, such as the Koussevitzky Prize, BSO fellowships, the first prize at the Marzena, Berliner Kompositionsaufträge, a prize from the IMC International Rostrum of Composers at UNESCO, Paris, DIRECAM of the French Cultural Ministry, the Music Theater Award 2008 in Berlin, and the “OFQJ dance and new technology prize” at Bains Numériques #4, International Festival of Digital Art of Enghien-les-Bains, France, in 2009.
His works have been performed at major festivals such as Résonances/IRCAM, Sónar, ICC, Haus der Kulturen der Welt, ISEA, NIME, AV Festival, STRP Festival 2009, the Venice Biennale, etc.
http://gotolab.geidai.ac.jp/

August 1, 18:00

Toward Construction of a Controller and Parameter Estimation for a Physical Modeling Sound Source

Presenter: Tsubasa Tanaka

Overview

A physical-model sound source that simulates the sound-production mechanism of a real musical instrument can generate highly realistic sounds, as well as meta-physical sounds, using parameter settings different from those of the real thing. However, it is difficult to control the variety of tones it can generate as desired. We therefore constructed a timbre controller in Max by mapping high-dimensional acoustic features to a low-dimensional space using machine learning. We also attempt to estimate the parameters that achieve a timbre close to the sound of a real instrument.
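As a minimal sketch of the dimensionality-reduction idea described above (the abstract does not specify the presenter's actual method; PCA and scikit-learn are assumptions here):

```python
# Minimal sketch: project high-dimensional acoustic features onto
# a 2-D control space, then map a controller position back to an
# approximate feature target. Illustrative only.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 40))  # stand-in for 40-D acoustic features

pca = PCA(n_components=2).fit(features)
controls = pca.transform(features)     # 2-D controller coordinates

pad_xy = np.array([[0.3, -1.2]])       # a position chosen on a 2-D pad
feature_target = pca.inverse_transform(pad_xy)
print(feature_target.shape)            # (1, 40): target feature vector
```

A parameter-estimation stage would then search the physical model's parameters for a timbre whose features match this target.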

Biography

Music information science researcher and composer. He graduated from Kyoto University, Faculty of Science, Department of Mathematics. He completed a master's course at the University of Tokyo, Graduate School of Information Science and Technology, and a doctoral course at Tokyo University of the Arts, Department of Intermedia Art. He stayed in Paris from 2014 to 2021, conducting postdoctoral research at IRCAM, the Institut de Mathématiques de Jussieu-Paris Rive Gauche, and Sorbonne University. His specialties are algorithmic composition, the mathematics of music writing, and automatic music analysis. In 2016, he gave a rigorous mathematical formulation to composer Milton Babbitt's all-partitions array generation problem and successfully solved it. In 2017, he supervised the concert "AI Composition and Computational Creativity" at the Artificial Intelligence Aesthetics and Art Exhibition held at the Okinawa Institute of Science and Technology Graduate University. His algorithmic composition works were selected at SMC2018 and ICMC2018.

August 2, 18:00

Audio Reactive Expression Using Stable Diffusion

Presenter: Toru Yokoyama

Overview

This presentation uses the Stable Diffusion API from Max 8's Node for Max.
Using a technique called img2img, a video that reacts to audio, made simply with Jitter, is combined with a text prompt from which the AI generates images.
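The talk drives this from JavaScript running inside Node for Max; purely to illustrate the shape of an img2img request, here is a Python sketch against a hypothetical Stable Diffusion HTTP endpoint (the URL and JSON field names are assumptions, not a documented API):

```python
# Hypothetical img2img call: send one audio-reactive Jitter frame
# plus a text prompt; receive a diffused image back. Endpoint and
# field names are illustrative assumptions.
import base64
import requests

def img2img(frame_path, prompt, url="http://localhost:7860/sdapi/v1/img2img"):
    with open(frame_path, "rb") as f:
        frame_b64 = base64.b64encode(f.read()).decode("ascii")
    payload = {
        "init_images": [frame_b64],   # frame rendered by Jitter
        "prompt": prompt,
        "denoising_strength": 0.5,    # how far to drift from the input frame
    }
    r = requests.post(url, json=payload, timeout=120)
    r.raise_for_status()
    return base64.b64decode(r.json()["images"][0])

with open("out.png", "wb") as f:
    f.write(img2img("frame.png", "abstract waves of light"))
```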

Biography

Born in Fukuoka Prefecture in 1983. He graduated from the Institute of Advanced Media Arts and Sciences (IAMAS). He researches new forms of photographic expression possible only with digital technology, referring to the history of photography from the silver-halide era. At the same time, he also creates installations based on multimedia expression and real-time 3D expression that make full use of programming. Exhibitions he has participated in in recent years include “RGB” (2015/the newly/Mister Hollywood OSAKA), “Media Ambition Tokyo” (2016/Tokyo), “William Klein: Certain Hearts and Eyes” (2017/Tokyo), “Jerusalem Design Week” (2019/Israel), etc. Since 2018, he has also been a member of the design group v0id, whose members have backgrounds in architecture, photography, graphics, and programming. He works at FIGLAB/amana inc. and is a part-time lecturer at the Faculty of Music, Tokyo University of the Arts.

August 2, 18:30

Practical Applications of Max for Live in Live Electronics

Presenter: Haolun Gu

Overview

Ableton Live and Max are often combined for live electronics performances. In particular, when the many existing plug-ins and ready-made sound sources built into Ableton Live are used, control from the Max side is required: the so-called parameter-mapping function, realized through communication between the two applications. This presentation focuses on the creation and application of Max for Live devices for this purpose, explained together with their use in an actual live electronics setting.

Biography

A native of Soochow, China, GU is a composer and pianist. He received his bachelor's degree from the Shanghai Conservatory of Music in 2017 and completed his master's degree at Tokyo University of the Arts in 2020, where he is now a PhD student. He has been awarded a scholarship by the Japanese Ministry of Education, Culture, Sports, Science and Technology (MEXT) since 2021.
He received the 1st Prize at the 3rd Shanghai International Electronic Music Competition (IEMC) in Shanghai. His works have been commissioned and selected by The Centennial of the Electronic Musical Instruments at the National Museum of Nature and Science (Tokyo), the Future-Tradition New Media Masterclass Series (Shanghai), EMW (Shanghai), NYCEMF (New York), and ICMC (Santiago de Chile), as well as the Ensemble H[akka] 20th anniversary concert “Hiroshima and music” (Hiroshima), and he received a commission from the 7th Ryokoku Art Festival.
He has studied composition with Yi Qin, Mingwu Yin, Qiangbin Chen, Eric Arnal, Tatsuhiko Nishioka, and Suguru Goto.

August 3, 18:00

Method for Automatic Instrument Playing Techniques Recognition with Max

Presenter: Nicolas BROCHEC

Overview

This presentation outlines a method for automatic instrument playing technique recognition in Max. The idea is to use deep learning to recognize various techniques, such as staccato, flutter, and vibrato, in real-time and automatically switch sound effects. We will discuss previous research, introduce our method, and demonstrate its implementation in Max.
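As a toy sketch of the classification step only (PyTorch, the label set, and the network shape are illustrative assumptions, not the presenter's model):

```python
# Toy sketch: classify one mel-spectrogram frame into a playing
# technique; a host patch would then switch effects accordingly.
import torch
import torch.nn as nn

TECHNIQUES = ["ordinario", "staccato", "flutter", "vibrato"]

class TechniqueNet(nn.Module):
    def __init__(self, n_mels=64, n_classes=len(TECHNIQUES)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_mels, 128),
            nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        return self.net(x)

model = TechniqueNet().eval()  # untrained: for shape illustration only
frame = torch.randn(1, 64)     # stand-in for one mel-spectrogram frame
with torch.no_grad():
    technique = TECHNIQUES[model(frame).argmax(dim=1).item()]
print(f"detected: {technique} -> switch the matching effect chain")
```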

Biography

Nicolas Brochec is a French composer and computer music designer. He graduated from the Music Theory program at the Graduate School of Music at Paris 8 University and from the Composition program at the Académie Supérieure de Musique de Strasbourg. He is currently a PhD candidate in Music Creativity and the Environment at Tokyo University of the Arts. Recommended by the Embassy of Japan in France, he is a recipient of the MEXT Research Scholarship for three years. His musical works have been performed in various parts of Europe and have received multiple awards. His current research focuses on instrument playing technique recognition for mixed music.
Website: nicolasbrochec.com

August 3, 18:30

On the Reproduction of the DISPLAY Language Used in the Festival Plaza (Expo '70) with Max

Presenter: Hideaki Isobe

Overview

At the "Japan World Exposition" Expo '70 held in 1970, various pavilions and buildings were built usingthe cutting-edge technology of the time. Among them, the festival square was built as the central buildingof the Expo as a place to hold various events such as opening and closing ceremonies and music concerts.A multi-channel sound system and a computer-controlled system to control it were installed in the festivalplaza, where many speakers were installed to provide sound for various events. The computer control ofthe sound system used a language called DISPLAY, which was originally developed at the time.

Biography

Born in Yamanashi in 1982. Composer and media artist. He studied composition, sound technology, and computer music under Takeshi Tsuchiya. In addition to his computer-based compositional activities, he also operates the electronics for works by various other composers. His major works have been performed in Japan, the Netherlands, Germany, and South Korea. He also researches and develops electronic musical instruments that use sensors, as well as performance aids: in collaboration with various composers and musicians, he has developed the "isobe rail," the "Videolon" with composer Kazutomo Yamamoto, the "hosiya board" with Takeo Hoshiya, and the "murata sensor" for trombone with the trombonist Kosei Murata. He presides over the electroacoustic music concert series Maximum. He is a part-time lecturer at Tokyo College of Music.