It just works.
- install PlatformIO.
- install Python and the `pyaudio`, `numpy`, and `matplotlib` libraries.
- modify the code to fit your WiFi settings.
- compile and upload the firmware to the ESP8266-12F.
- solder the four MAX7219 8*8 LED matrix modules together:
```
ESP8266
   |
   V
[matrix 1]-DOUT > DIN-[matrix 2]-DOUT > DIN-[matrix 3]-DOUT > DIN-[matrix 4]
          -CS   >  CS-          -CS   >  CS-          -CS   >  CS-
          -CLK  > CLK-          -CLK  > CLK-          -CLK  > CLK-
          -VCC  > VCC-          -VCC  > VCC-          -VCC  > VCC-
          -GND  > GND-          -GND  > GND-          -GND  > GND-
<-----------L CHANNEL----------> <-----------R CHANNEL---------->
HIGH FREQ             LOW FREQ              LOW FREQ             HIGH FREQ
```
- connect the first module to the ESP8266: VCC - 5V, GND - GND, DIN - D7, CS - D6, CLK - D5.

NOTE: check that VCC and GND are not short-circuited before powering on.

- power on, then run `server/main.py`.
- if `server/main.py` crashes, change `input_device_index` at line 29, or submit an issue (the device-listing sketch after this list shows how to find a working index).
- run `pavucontrol`; on the Recording tab, set the record source for `ALSA plug-in [python xx]: ALSA Capture`.
- enjoy!
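If `server/main.py` cannot open your microphone, the following minimal sketch (not part of the repo, plain PyAudio calls only) lists the input-capable devices so you can pick a working value for `input_device_index`:

```python
# List audio input devices reported by PyAudio.
import pyaudio

pa = pyaudio.PyAudio()
for i in range(pa.get_device_count()):
    info = pa.get_device_info_by_index(i)
    # Only devices with input channels can be used as input_device_index.
    if info.get("maxInputChannels", 0) > 0:
        print(f"index {i}: {info['name']} "
              f"({info['maxInputChannels']} input channels, "
              f"default rate {info['defaultSampleRate']} Hz)")
pa.terminate()
```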
The server script (`server/main.py`) implements the real-time audio visualizer. It imports the necessary libraries, sets the chunk size and sampling rate, opens a socket to the ESP8266, and finds a device index for audio input. It creates a plot with two axes, one for the left channel and one for the right channel. In a loop, it reads audio data from the stream, splits it into left and right channels, performs a Fourier transform on each channel, takes the magnitude of the result, and applies a logarithmic scale to it. It then takes the maximum value from each frequency range, updates the plot's y-data with those values, sends them over the socket, and finally redraws the plot and flushes GUI events (condensed code sketches of the setup stage and of the main loop are included below).
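Before the step-by-step breakdown, here is a minimal, hedged sketch of the setup stage. The constant values, the ESP8266 address, and the plotting details are placeholder assumptions; the actual values live in `server/main.py`.

```python
# Hedged sketch of the setup stage: UDP socket, stereo input stream, preview plot.
import socket
import pyaudio
import numpy as np
import matplotlib.pyplot as plt

CHUNK_SIZE = 1024                    # samples per read (assumed value)
SAMPLING_RATE = 44100                # Hz (assumed value)
ESP_ADDR = ("192.168.1.50", 1234)    # placeholder IP; firmware listens on UDP port 1234

# UDP socket "connected" to the ESP8266 so the loop can use plain send().
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.connect(ESP_ADDR)

# Find a stereo-capable input device (the index the README says to adjust
# if the script crashes).
pa = pyaudio.PyAudio()
device_index = None
for i in range(pa.get_device_count()):
    if pa.get_device_info_by_index(i).get("maxInputChannels", 0) >= 2:
        device_index = i
        break

stream = pa.open(format=pyaudio.paFloat32, channels=2, rate=SAMPLING_RATE,
                 input=True, frames_per_buffer=CHUNK_SIZE,
                 input_device_index=device_index)

# Two subplots: left channel on top, right channel below, 16 values each.
fig, (ax_l, ax_r) = plt.subplots(2, 1)
x = np.linspace(0, 16, 16)
line_l, = ax_l.plot(x, np.random.rand(16))
line_r, = ax_r.plot(x, np.random.rand(16))
for ax in (ax_l, ax_r):
    ax.set_xlim(0, 16)
    ax.set_ylim(0, 8)
plt.show(block=False)
```

Connecting the UDP socket up front lets the loop call `send()` without re-specifying the address on every packet.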
Step by step, the script:

- imports the required libraries: `pyaudio`, `numpy`, `matplotlib`, `math`, and `socket`.
- sets the `CHUNK_SIZE` and `SAMPLING_RATE` constants used when reading audio data from the microphone.
- creates a socket and connects it to a specific IP address and port, over which the audio data is sent to the ESP8266.
- creates a `pyaudio.PyAudio()` instance and loops through all available audio input devices to find one suitable for recording.
- creates a plot with two axes, one for the left channel and one for the right channel, using `matplotlib.pyplot.subplots()`.
- generates a sequence of 16 equally spaced numbers between 0 and 16 with `numpy.linspace()`, used as the x-axis data for the plot.
- initializes two plot lines, one per channel, each starting with random y-axis data of length 16.
- sets the limits and labels for the x- and y-axes of each subplot.
- displays the plot using `matplotlib.pyplot.show()`.
- initializes four arrays of length 16 that store the maximum values of the audio data in specific frequency ranges.
- initializes a `bytearray` of length 64, used to send the audio data over the socket.
- enters a loop that repeatedly reads a chunk of audio data from the microphone with `stream.read(CHUNK_SIZE)`.
- converts the audio data to a numpy array of 32-bit floating-point samples.
- splits the audio data into left and right channels.
- performs a Fast Fourier Transform (FFT) on each channel to convert the audio data from the time domain to the frequency domain.
- takes the absolute value of the FFT results and applies a logarithmic scale with `numpy.log10()`.
- finds the maximum value in each of 16 frequency ranges for both the left and right channels and stores them in separate arrays.
- updates the y-axis data of the plot with the maximum values stored in the arrays.
- caps each array value at 7 with a simple threshold.
- writes the capped values, as well as the unfiltered maximum values, into the `bytearray`.
- sends the `bytearray` over the socket.
- redraws the plot with `matplotlib.pyplot.draw()` and flushes pending GUI events via the figure canvas's `flush_events()`.
This process is repeated continuously in a loop, resulting in a real-time audio visualizer that displays the amplitude of specific frequency ranges in the audio data being recorded. The visualizer also sends this data to another device over a socket connection.
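A condensed sketch of one iteration of this loop is shown below, reusing the `stream`, `sock`, plot lines, and figure from the setup sketch above. The helper names, the even/odd channel de-interleaving, the band splitting via `numpy.array_split`, and the byte layout of the 64-byte packet are illustrative assumptions rather than the repo's exact code.

```python
# Hedged sketch of one iteration of the visualizer loop.
import numpy as np

N_BANDS = 16          # 16 columns per channel on the LED matrices
LED_ROWS = 7          # values sent to the display are capped at 7

def process_chunk(stream, sock, line_l, line_r, fig, chunk_size=1024):
    # 1. Read one chunk of interleaved stereo audio and view it as float32.
    raw = stream.read(chunk_size, exception_on_overflow=False)
    samples = np.frombuffer(raw, dtype=np.float32)
    left, right = samples[0::2], samples[1::2]

    # 2. Time domain -> frequency domain, magnitude on a log scale.
    def band_maxima(channel):
        spectrum = np.abs(np.fft.rfft(channel))
        levels = np.log10(spectrum + 1e-12)          # avoid log10(0)
        # 3. Maximum level in each of 16 frequency ranges (DC bin skipped).
        bands = np.array_split(levels[1:], N_BANDS)
        return np.array([band.max() for band in bands])

    max_l, max_r = band_maxima(left), band_maxima(right)

    # 4. Update the preview plot.
    line_l.set_ydata(max_l)
    line_r.set_ydata(max_r)

    # 5. Cap at 7 so each value fits an 8-pixel-tall matrix column, then pack
    #    capped and raw (unthresholded, byte-clamped) levels into 64 bytes.
    capped_l = np.clip(max_l, 0, LED_ROWS).astype(np.uint8)
    capped_r = np.clip(max_r, 0, LED_ROWS).astype(np.uint8)
    raw_l = np.clip(max_l, 0, 255).astype(np.uint8)
    raw_r = np.clip(max_r, 0, 255).astype(np.uint8)
    packet = bytearray(np.concatenate([capped_l, capped_r, raw_l, raw_r]).tobytes())
    sock.send(packet)                                # 64 bytes to the ESP8266

    # 6. Redraw and keep the GUI responsive.
    fig.canvas.draw()
    fig.canvas.flush_events()
```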
The ESP8266 firmware connects to a WiFi network and then listens for UDP packets on port 1234. When a packet arrives, it draws a graph of the data on the LED matrix display.
- The code includes the necessary libraries for connecting to a WiFi network, using SPI and I2C, and for displaying data on a display.
- It defines the SSID and password of the WiFi network to connect to.
- It sets up the static IP, gateway, subnet, and DNS for the connection.
- It initializes the display.
- In the `setup()` function, it starts the serial connection, initializes the display, connects to the WiFi network, and begins listening for UDP packets on port 1234.
- In the `loop()` function, it checks whether a UDP packet is available.
- If one is, it reads the packet into a buffer, checks that the packet size is 64 bytes, and then draws a graph of the data on the display.