Iniubong Obonguko
How to create a video and audio recorder in React
February 16, 2023
7 min read
As remote and hybrid work become more popular, organizations need to adopt asynchronous forms of communication in their day-to-day operations. These include recording meetings or taking notes, relying more on text-based communication, and so on. Many applications already offer these capabilities, which makes asynchronous communication easier to adopt.
In this article, you’ll learn how to add video and audio recording capabilities to your React applications using the MediaRecorder API.
Jump ahead:
Scaffolding a new React app
The MediaRecorder API
Creating the demo app interface
Audio recorder component
Video recorder component
Styling our application
Rendering our components
Component enhancement: audio recorder
Component enhancement: video recorder
Alternatives
Prerequisites
Node.js installed on your machine
Working knowledge of JavaScript and React
Scaffolding a new React app
First, we’ll scaffold a new React application using Vite, a super-fast JavaScript build tool:
npm create vite@latest
Answer the prompts that follow the command:
Type in a project name (react-recorder)
Choose React as the framework
Select the JavaScript variant
Next, let’s navigate to the newly created project directory, install the required dependencies, and run the development server using the following command:
cd react-recorder && npm i && npm run dev
Once complete, a development server will be started on http://localhost:5173/. Let’s open up the URL in the web browser. We should see the following:
To record audio or video in the browser, we’ll need a MediaStream. MediaStream is an interface that represents media content and consists of audio and video tracks.
To obtain a MediaStream object, you can either use the MediaStream() constructor or call one of the following functions: MediaDevices.getUserMedia(), MediaDevices.getDisplayMedia(), or HTMLCanvasElement.captureStream().
For the sake of this tutorial, we’ll focus on the MediaDevices.getUserMedia() function to create a video and audio recorder.
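Before wiring this into React, it helps to see the call in isolation. Here’s a minimal sketch of requesting a microphone-only stream with getUserMedia; the constraint values and logging are purely illustrative:

// Minimal sketch: ask the browser for a microphone-only MediaStream.
// The constraints object describes which kinds of tracks we want.
async function getAudioStream() {
  try {
    const stream = await navigator.mediaDevices.getUserMedia({
      audio: true,
      video: false,
    });
    // A MediaStream is made up of MediaStreamTrack objects
    console.log(stream.getAudioTracks());
    return stream;
  } catch (err) {
    // The user may deny permission, or no matching device may exist
    console.error(err.name, err.message);
  }
}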
Creating the demo app interface
In this section, we’ll be creating the demo application’s interface.
Audio recorder component
First, create a file in the src directory named AudioRecorder.jsx and paste into it the contents of the following code block:
import { useState, useRef } from "react";

const AudioRecorder = () => {
  const [permission, setPermission] = useState(false);
  const [stream, setStream] = useState(null);

  const getMicrophonePermission = async () => {
    if ("MediaRecorder" in window) {
      try {
        const streamData = await navigator.mediaDevices.getUserMedia({
          audio: true,
          video: false,
        });
        setPermission(true);
        setStream(streamData);
      } catch (err) {
        alert(err.message);
      }
    } else {
      alert("The MediaRecorder API is not supported in your browser.");
    }
  };

  return (
    <main>
      <h2>Audio Recorder</h2>
      <div className="audio-controls">
        {!permission ? (
          <button onClick={getMicrophonePermission} type="button">
            Get Microphone
          </button>
        ) : null}
        {permission ? (
          <button type="button">Record</button>
        ) : null}
      </div>
    </main>
  );
};

export default AudioRecorder;
The code block above does the following:
Declares the UI for the audio recorder component
Requests microphone permission from the browser using the getMicrophonePermission function
Sets the MediaStream received from the navigator.mediaDevices.getUserMedia function to the stream state variable (we’ll get to using that soon)
Video recorder component
Next, let’s create the interface for the video recorder component.
Still in the src directory, create another file named VideoRecorder.jsx and paste in the contents of the code block below:
import { useState, useRef } from "react";

const VideoRecorder = () => {
  const [permission, setPermission] = useState(false);
  const [stream, setStream] = useState(null);

  const getCameraPermission = async () => {
    if ("MediaRecorder" in window) {
      try {
        const streamData = await navigator.mediaDevices.getUserMedia({
          audio: true,
          video: true,
        });
        setPermission(true);
        setStream(streamData);
      } catch (err) {
        alert(err.message);
      }
    } else {
      alert("The MediaRecorder API is not supported in your browser.");
    }
  };

  return (
    <main>
      <h2>Video Recorder</h2>
      <div className="video-controls">
        {!permission ? (
          <button onClick={getCameraPermission} type="button">
            Get Camera
          </button>
        ) : null}
        {permission ? (
          <button type="button">Record</button>
        ) : null}
      </div>
    </main>
  );
};

export default VideoRecorder;
Similar to the audio recorder component, the code block above achieves the following:
Declares the UI for the video recorder component
Requests camera and microphone permission from the browser using the getCameraPermission function
Sets the MediaStream received from the getUserMedia method to the stream state variable
Styling our application
We won’t need to write too much code to style the application since most of the styling was taken care of during the app scaffolding.
In the index.css file, located in the src directory, add the following styles at the bottom:
.button-flex {
  display: flex;
  justify-content: center;
  align-items: center;
  gap: 10px;
}

.audio-controls,
.video-controls {
  margin-bottom: 20px;
}

.audio-player,
.video-player,
.recorded-player {
  display: flex;
  flex-direction: column;
  align-items: center;
}

.live-player {
  height: 200px;
  width: 400px;
  border: 1px solid #646cff;
  margin-bottom: 30px;
}

.recorded-player video {
  height: 400px;
  width: 800px;
}
Then, change the value of place-items on the body element style from center to start:
body {
  margin: 0;
  display: flex;
  place-items: start;
  min-width: 320px;
  min-height: 100vh;
}
Rendering our components
To display the newly created components, navigate to App.jsx and replace its contents with the following block of code:
import "./App.css";
import { useState, useRef } from "react";
import VideoRecorder from "../src/VideoRecorder";
import AudioRecorder from "../src/AudioRecorder";
const App = () => {
let [recordOption, setRecordOption] = useState("video");
const toggleRecordOption = (type) => {
return () => {
setRecordOption(type);
return (
<h1>React Media Recorder</h1>
<div className="button-flex">
<button onClick={toggleRecordOption("video")}>
Record Video
</button>
<button onClick={toggleRecordOption("audio")}>
Record Audio
</button>
{recordOption === "video" ? <VideoRecorder /> : <AudioRecorder />}
export default App;
The code block above renders either the VideoRecorder or the AudioRecorder component, depending on the selected option.
Going back to the browser, you should get the following results:
With that done, let’s focus on enhancing the functionality of the components.
Component enhancement: audio recorder
Our audio recorder needs to meet the following requirements:
Stop/start audio recording
Playback and audio download
Stop/start audio recording
Let’s start by declaring our variables and state values.
First, just outside the component’s function scope (because it doesn’t need to trigger or respond to re-renders), let’s declare the mimeType variable:
const mimeType = "audio/webm";
This variable sets the desired file type. Learn more about the MIME type here.
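Not every browser can record every container and codec combination. As a quick check (this isn’t part of the original walkthrough), you can ask MediaRecorder which MIME types it supports before settling on one:

// MediaRecorder.isTypeSupported() returns a Boolean for a given MIME type.
// Here we pick the first type from a preference list that the browser supports.
const preferredTypes = ["audio/webm", "audio/ogg", "audio/mp4"];
const supportedType = preferredTypes.find((type) =>
  MediaRecorder.isTypeSupported(type)
);
console.log(supportedType); // e.g. "audio/webm" in Chrome and Firefox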
Next, let’s declare the following state variables inside the AudioRecorder component scope:
const [permission, setPermission] = useState(false);
const mediaRecorder = useRef(null);
const [recordingStatus, setRecordingStatus] = useState("inactive");
const [stream, setStream] = useState(null);
const [audioChunks, setAudioChunks] = useState([]);
const [audio, setAudio] = useState(null);
permission uses a Boolean value to indicate whether user permission has been given
mediaRecorder holds the data from creating a new MediaRecorder object, given a MediaStream to record
recordingStatus sets the current recording status of the recorder. The three possible values are recording, inactive, and paused
stream contains the MediaStream received from the getUserMedia method
audioChunks contains encoded pieces (chunks) of the audio recording
audio contains a blob URL to the finished audio recording
With that out of the way, let’s define the functions that will enable us to start and stop the recording.
Let’s begin with the startRecording function. Just after the getMicrophonePermission function, add the following code:
const startRecording = async () => {
  setRecordingStatus("recording");
  //create new MediaRecorder instance using the stream
  const media = new MediaRecorder(stream, { mimeType });
  //set the MediaRecorder instance to the mediaRecorder ref
  mediaRecorder.current = media;
  //invokes the start method to start the recording process
  mediaRecorder.current.start();
  let localAudioChunks = [];
  mediaRecorder.current.ondataavailable = (event) => {
    if (typeof event.data === "undefined") return;
    if (event.data.size === 0) return;
    localAudioChunks.push(event.data);
  };
  setAudioChunks(localAudioChunks);
};
Next, create a stopRecording function below the startRecording function:
const stopRecording = () => {
  setRecordingStatus("inactive");
  //stops the recording instance
  mediaRecorder.current.stop();
  mediaRecorder.current.onstop = () => {
    //creates a blob file from the audio chunks data
    const audioBlob = new Blob(audioChunks, { type: mimeType });
    //creates a playable URL from the blob file
    const audioUrl = URL.createObjectURL(audioBlob);
    setAudio(audioUrl);
    setAudioChunks([]);
  };
};
Next, let’s modify <div className="audio-controls"> to conditionally render the start/stop recording buttons depending on the recordingStatus state:
<div className="audio-controls">
  {!permission ? (
    <button onClick={getMicrophonePermission} type="button">
      Get Microphone
    </button>
  ) : null}
  {permission && recordingStatus === "inactive" ? (
    <button onClick={startRecording} type="button">
      Start Recording
    </button>
  ) : null}
  {recordingStatus === "recording" ? (
    <button onClick={stopRecording} type="button">
      Stop Recording
    </button>
  ) : null}
</div>
Playback and audio download
To play back the recorded audio file, we’ll use the HTML audio tag. Under the div we created for audio-controls, let’s add the following code:
{audio ? (
  <div className="audio-player">
    <audio src={audio} controls></audio>
    <a download href={audio}>
      Download Recording
    </a>
  </div>
) : null}
Linking the blob URL from the recording to the anchor element and adding the download attribute makes it downloadable.
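One detail worth knowing: every URL created with URL.createObjectURL keeps its blob in memory until it is revoked or the page unloads. If users record repeatedly, you may want to release the previous URL first. Here’s a minimal sketch using a hypothetical replaceAudioUrl helper that isn’t part of the original code:

// Hypothetical helper: revoke the previous recording's blob URL before
// creating a new one, so the browser can free the old blob.
const replaceAudioUrl = (previousUrl, nextBlob) => {
  if (previousUrl) URL.revokeObjectURL(previousUrl);
  return URL.createObjectURL(nextBlob);
};

// Inside mediaRecorder.current.onstop, instead of setAudio(audioUrl):
// setAudio((previous) => replaceAudioUrl(previous, audioBlob));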
Now, the audio recorder should look like this:
Component enhancement: Video recorder
Our complete video recorder needs to meet the following requirements:
Real-time video feed
Stop/start video recording
Playback and video download
Real-time video feed
We need to see the camera’s field of view when it’s active to know what area is captured in the recording.
First, let’s set the desired file mimeType just outside the function scope of the VideoRecorder component:
const mimeType = "video/webm";
Next, let’s define the required state variables. We’ll go back to the VideoRecorder.jsx file we previously created:
const [permission, setPermission] = useState(false);
const mediaRecorder = useRef(null);
const liveVideoFeed = useRef(null);
const [recordingStatus, setRecordingStatus] = useState("inactive");
const [stream, setStream] = useState(null);
const [videoChunks, setVideoChunks] = useState([]);
const [recordedVideo, setRecordedVideo] = useState(null);
permission uses a Boolean value to indicate whether user permission has been given
liveVideoFeed contains the live video stream of the user’s camera
recordingStatus sets the current recording status of the recorder. The three possible values are recording, inactive, and paused
stream contains the MediaStream received from the getUserMedia method
videoChunks contains encoded pieces (chunks) of the video recording
recordedVideo contains a blob URL to the finished video recording
Let’s also modify the getCameraPermission function to the following:
const getCameraPermission = async () => {
  setRecordedVideo(null);
  if ("MediaRecorder" in window) {
    try {
      const videoConstraints = {
        audio: false,
        video: true,
      };
      const audioConstraints = { audio: true };
      // create audio and video streams separately
      const audioStream = await navigator.mediaDevices.getUserMedia(
        audioConstraints
      );
      const videoStream = await navigator.mediaDevices.getUserMedia(
        videoConstraints
      );
      setPermission(true);
      //combine both audio and video streams
      const combinedStream = new MediaStream([
        ...videoStream.getVideoTracks(),
        ...audioStream.getAudioTracks(),
      ]);
      setStream(combinedStream);
      //set the video stream to the live feed player
      liveVideoFeed.current.srcObject = videoStream;
    } catch (err) {
      alert(err.message);
    }
  } else {
    alert("The MediaRecorder API is not supported in your browser.");
  }
};
To prevent the microphone from causing an echo during recording, we’ll create two separate media streams for audio and video, respectively, and then combine both streams into one. Finally, we set the liveVideoFeed to contain just the video stream.
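Note that the tutorial never stops the underlying tracks, so the camera indicator will stay on even after recording ends. If you want to release the devices when the component unmounts, here’s a minimal cleanup sketch (it assumes useEffect is added to the react import; this isn’t part of the original code):

// Cleanup sketch for VideoRecorder: stop every track when the component
// unmounts so the browser releases the camera and microphone.
useEffect(() => {
  return () => {
    if (stream) {
      stream.getTracks().forEach((track) => track.stop());
    }
  };
}, [stream]);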
Stop and start video recording
Similar to the audio recorder we created earlier, we’ll start by creating the startRecording function just below the getCameraPermission function:
const startRecording = async () => {
  setRecordingStatus("recording");
  const media = new MediaRecorder(stream, { mimeType });
  mediaRecorder.current = media;
  mediaRecorder.current.start();
  let localVideoChunks = [];
  mediaRecorder.current.ondataavailable = (event) => {
    if (typeof event.data === "undefined") return;
    if (event.data.size === 0) return;
    localVideoChunks.push(event.data);
  };
  setVideoChunks(localVideoChunks);
};
Next, we’ll create the stopRecording function just below the startRecording function to stop the video recording:
const stopRecording = () => {
  setPermission(false);
  setRecordingStatus("inactive");
  mediaRecorder.current.stop();
  mediaRecorder.current.onstop = () => {
    const videoBlob = new Blob(videoChunks, { type: mimeType });
    const videoUrl = URL.createObjectURL(videoBlob);
    setRecordedVideo(videoUrl);
    setVideoChunks([]);
  };
};
Playback and video download
To enable playback and video download, and to see all the changes we’ve made so far, let’s update the HTML section of our component file:
<main>
  <h2>Video Recorder</h2>
  <div className="video-controls">
    {!permission ? (
      <button onClick={getCameraPermission} type="button">
        Get Camera
      </button>
    ) : null}
    {permission && recordingStatus === "inactive" ? (
      <button onClick={startRecording} type="button">
        Start Recording
      </button>
    ) : null}
    {recordingStatus === "recording" ? (
      <button onClick={stopRecording} type="button">
        Stop Recording
      </button>
    ) : null}
  </div>
  <div className="video-player">
    {!recordedVideo ? (
      <video ref={liveVideoFeed} autoPlay className="live-player"></video>
    ) : null}
    {recordedVideo ? (
      <div className="recorded-player">
        <video className="recorded" src={recordedVideo} controls></video>
        <a download href={recordedVideo}>
          Download Recording
        </a>
      </div>
    ) : null}
  </div>
</main>
Now, the video recorder should look like this:
Alternatives to creating your own video and audio recorder
Rather than write all this code to enable audio and video recording in your application, you might want to consider using an external library that is well-optimized for what you’re trying to achieve.
A popular example is RecordRTC, a flexible JavaScript library that offers a wide range of customization options. Other examples include react-media-recorder and react-video-recorder.
N.B., remember to do your research before using any of these packages.
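For a sense of how much boilerplate a library can remove, here’s a sketch based on react-media-recorder’s documented useReactMediaRecorder hook; verify the exact signature against the library’s README before relying on it:

import { useReactMediaRecorder } from "react-media-recorder";

// The hook manages permissions, the MediaRecorder lifecycle, and blob URLs.
const RecordView = () => {
  const { status, startRecording, stopRecording, mediaBlobUrl } =
    useReactMediaRecorder({ video: true });

  return (
    <div>
      <p>{status}</p>
      <button onClick={startRecording}>Start Recording</button>
      <button onClick={stopRecording}>Stop Recording</button>
      {/* mediaBlobUrl points to the finished recording once you stop */}
      <video src={mediaBlobUrl} controls autoPlay loop />
    </div>
  );
};

export default RecordView;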
Conclusion
In this tutorial, we learned how to build a custom audio and video recorder in React using the browser’s native MediaRecorder and MediaStream APIs.
All of the source code for this project can be found in this GitHub repository. Feel free to fork the repository and play around with the code. I’d love to see what you can make of it 🙂
Cheers!