
In this tutorial, we’ll show you how to implement voice assistance in your React app using React Speech Recognition.

React Speech Recognition

We’ll cover the following:

  • What is React Speech Recognition?
  • React Speech Recognition setup and installation
  • Adding React Speech Recognition Hooks
  • Using React Speech Recognition to perform tasks

    What is React Speech Recognition?

    React Speech Recognition is a React Hook that works with the Web Speech API to translate speech from your device’s mic into text. This text can then be read by your React app and used to perform tasks.

    React Speech Recognition provides a commands option for performing a certain task in response to a specific speech phrase. For example, when a user asks for weather information, you can make a call to a weather API. This is just a basic example, but when it comes to voice assistance and control, the possibilities are endless.
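    To make the weather example concrete, a commands entry might look like the sketch below. Note that fetchWeather is a hypothetical stand-in for your own weather API call, not part of the library:

```javascript
// Hypothetical stand-in for a real weather API call.
function fetchWeather(city) {
  return `Fetching weather for ${city}...`;
}

// A minimal commands entry: the "*" wildcard captures the spoken city
// name and passes it to the callback as an argument.
const weatherCommand = {
  command: "what is the weather in *",
  callback: (city) => fetchWeather(city),
};
```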

    Browser support

    As of February 2021, React Speech Recognition supports the following browsers:

  • Google Chrome (recommended)
  • Microsoft Edge
  • Google Chrome for Android
  • Android Webview
  • Samsung Internet

    Unfortunately, iOS does not support these APIs.
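    If you want to feature-detect support yourself, it amounts to checking for the Web Speech API's SpeechRecognition constructor (Chrome exposes it with a webkit prefix). A minimal sketch, written to accept the global object so it can run outside a browser:

```javascript
// Returns true if the environment exposes the Web Speech API's
// SpeechRecognition constructor (standard or webkit-prefixed).
function speechRecognitionAvailable(globalObj) {
  return "SpeechRecognition" in globalObj || "webkitSpeechRecognition" in globalObj;
}
```

    In a browser you would pass window; react-speech-recognition wraps the same check in its browserSupportsSpeechRecognition helper.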

    React Speech Recognition setup and installation

    To add React Speech Recognition to your React project, simply open your terminal and type:

    npm i --save react-speech-recognition
    

    When you press Enter, npm will add the package to your project.

    Build a demo UI

    To see how the speech recognition Hook works, we’ll build a simple UI.

    First, we’ll add a round button with a mic icon, a button with text to indicate whether or not we are listening to user speech, and a stop button to stop listening.

    Below these elements, we’ll show the user’s speech-to-text translation and create a reset button to clear the text and stop listening.

    Here is our JSX for the component described above:

    // App.js
    function App() {
      return (
        <div className="microphone-wrapper">
          <div className="mircophone-container">
            <div className="microphone-icon-container">
              <img src={microPhoneIcon} className="microphone-icon" alt="microphone" />
            </div>
            <div className="microphone-status">
              Click to start Listening
            </div>
            <button className="microphone-stop btn">
              Stop
            </button>
          </div>
          <div className="microphone-result-container">
            <div className="microphone-result-text">Speech text here</div>
            <button className="microphone-reset btn">
              Reset
            </button>
          </div>
        </div>
      );
    }

    With that set up, we can now add some styling:

    /* App.css */
    * {
      margin: 0;
      padding: 0;
      box-sizing: border-box;
    }
    body {
      background-color: rgba(0, 0, 0, 0.8);
      font-family: "Segoe UI", Tahoma, Geneva, Verdana, sans-serif;
      color: white;
    }
    .mircophone-container {
      display: flex;
      justify-content: center;
      align-items: center;
      width: 100vw;
      height: 50vh;
    }
    .microphone-icon-container {
      width: 100px;
      height: 100px;
      border-radius: 50%;
      background-image: linear-gradient(128deg, #ffffff, #647c88);
      padding: 20px;
      margin-right: 20px;
      position: relative;
      cursor: pointer;
    }
    .microphone-icon-container.listening::before {
      content: "";
      width: 100px;
      height: 100px;
      background-color: #ffffff81;
      position: absolute;
      top: 50%;
      left: 50%;
      transform: translate(-50%, -50%) scale(1.4);
      border-radius: 50%;
      animation: listening infinite 1.5s;
    }
    @keyframes listening {
      0% {
        opacity: 1;
        transform: translate(-50%, -50%) scale(1);
      }
      100% {
        opacity: 0;
        transform: translate(-50%, -50%) scale(1.4);
      }
    }
    .microphone-icon {
      width: 100%;
      height: 100%;
    }
    .microphone-status {
      font-size: 22px;
      margin-right: 20px;
      min-width: 215px;
    }
    .btn {
      border: none;
      padding: 10px 30px;
      margin-right: 10px;
      outline: none;
      cursor: pointer;
      font-size: 20px;
      border-radius: 25px;
      box-shadow: 0px 0px 10px 5px #ffffff1a;
    }
    .microphone-result-container {
      text-align: center;
      height: 50vh;
      display: flex;
      flex-direction: column;
      justify-content: space-between;
      align-items: center;
      padding-bottom: 30px;
    }
    .microphone-result-text {
      margin-bottom: 30px;
      width: 70vw;
      overflow-y: auto;
    }
    .microphone-reset {
      border: 1px solid #fff;
      background: none;
      color: white;
      width: fit-content;
    }

    As you may have noticed, we also included an animation that will play when listening has started, thereby alerting the user that they can now speak.
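    Since the pulse animation is keyed entirely off the listening class, the app only needs to toggle that class on the icon container. A DOM-free sketch of the class computation (the real app uses classList.add and classList.remove):

```javascript
// Computes the icon container's class string from the listening state;
// the CSS ::before pulse runs only while "listening" is present.
function iconClasses(isListening) {
  return isListening
    ? "microphone-icon-container listening"
    : "microphone-icon-container";
}
```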

    Adding React Speech Recognition Hooks

    To use React Speech Recognition, we must first import it into the component. We will use the useSpeechRecognition hook and the SpeechRecognition object.

    To import React Speech Recognition:

    import SpeechRecognition, { useSpeechRecognition } from "react-speech-recognition";
    

    To start listening to the user’s voice, we need to call the startListening function:

    SpeechRecognition.startListening()
    

    To stop listening, we can call stopListening:

    SpeechRecognition.stopListening()
    

    To get the transcript of the user’s speech, we will use transcript:

    const { transcript } = useSpeechRecognition()
    

    The transcript value updates with the recognized text whenever the user speaks.

    To reset or clear the value of transcript, you can call resetTranscript:

    const { resetTranscript } = useSpeechRecognition()
    

    Using the resetTranscript function will set the transcript to empty.
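    To build intuition for how transcript and resetTranscript interact, here is a DOM-free model of that behavior (an illustrative sketch, not the library's implementation): recognized speech appends to the transcript, and a reset clears it.

```javascript
// Illustrative model only: recognized speech appends to the transcript,
// and a reset returns it to an empty string.
function transcriptReducer(transcript, action) {
  switch (action.type) {
    case "speech":
      return transcript ? transcript + " " + action.text : action.text;
    case "reset":
      return "";
    default:
      return transcript;
  }
}
```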

    Finally, to check whether the browser supports the Web Speech APIs or not, we can use this function:

    if (!SpeechRecognition.browserSupportsSpeechRecognition()) {
      // Browser not supported; render some useful fallback info instead.
    }

    Full code

    With everything we’ve reviewed to this point, we are now ready to set up our code. Note that in the block below, we added the listening events and corresponding states:

    import { useRef, useState } from "react";
    import SpeechRecognition, { useSpeechRecognition } from "react-speech-recognition";
    import "./App.css";
    import microPhoneIcon from "./microphone.svg";

    function App() {
      const { transcript, resetTranscript } = useSpeechRecognition();
      const [isListening, setIsListening] = useState(false);
      const microphoneRef = useRef(null);

      if (!SpeechRecognition.browserSupportsSpeechRecognition()) {
        return (
          <div className="mircophone-container">
            Browser does not support Speech Recognition.
          </div>
        );
      }

      const handleListening = () => {
        setIsListening(true);
        microphoneRef.current.classList.add("listening");
        SpeechRecognition.startListening({
          continuous: true,
        });
      };
      const stopHandle = () => {
        setIsListening(false);
        microphoneRef.current.classList.remove("listening");
        SpeechRecognition.stopListening();
      };
      const handleReset = () => {
        stopHandle();
        resetTranscript();
      };

      return (
        <div className="microphone-wrapper">
          <div className="mircophone-container">
            <div
              className="microphone-icon-container"
              ref={microphoneRef}
              onClick={handleListening}
            >
              <img src={microPhoneIcon} className="microphone-icon" alt="microphone" />
            </div>
            <div className="microphone-status">
              {isListening ? "Listening..." : "Click to start Listening"}
            </div>
            {isListening && (
              <button className="microphone-stop btn" onClick={stopHandle}>
                Stop
              </button>
            )}
          </div>
          {transcript && (
            <div className="microphone-result-container">
              <div className="microphone-result-text">{transcript}</div>
              <button className="microphone-reset btn" onClick={handleReset}>
                Reset
              </button>
            </div>
          )}
        </div>
      );
    }
    export default App;

    Using React Speech Recognition to perform tasks

    Now we have set up the app so that when a user clicks the mic button, the app listens to their voice and displays the transcript below. Note that you'll need to grant microphone permission the first time you run the app.

    Now comes the fun part: adding commands to perform a task based on user speech/phrases.

    Adding commands

    To add commands, we pass an array as the commands option to the useSpeechRecognition Hook.
    Before we can do that, however, we must prepare our commands array like so:

    const commands = [
      {
        command: "open *",
        callback: (website) => {
          window.open("http://" + website.split(" ").join(""));
        },
      },
      {
        command: "change background colour to *",
        callback: (color) => {
          document.body.style.background = color;
        },
      },
      {
        command: "reset",
        callback: () => {
          handleReset();
        },
      },
      {
        command: "reset background colour",
        callback: () => {
          document.body.style.background = `rgba(0, 0, 0, 0.8)`;
        },
      },
    ];

    Remember that commands is an array of objects, each with command and callback properties. command is the phrase to listen for; callback is the function that fires when that phrase is recognized.

    In the example above, you may have noticed that we passed an asterisk symbol in the first and second commands. This wildcard captures one or more spoken words and passes them to the callback function as an argument.
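    To see what the first callback actually does with the captured words, here is its transformation pulled out into a standalone helper (buildUrl is a hypothetical name for illustration): the transcript separates spoken words with spaces, so the callback strips them before building the URL.

```javascript
// Hypothetical helper mirroring the "open *" callback: the captured
// phrase arrives with spaces between spoken words, which are removed
// before prefixing the scheme.
function buildUrl(website) {
  return "http://" + website.split(" ").join("");
}
```

    So saying "open log rocket" captures "log rocket" and opens http://logrocket.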

    You can pass the commands variable to useSpeechRecognition like this:

    const { transcript, resetTranscript } = useSpeechRecognition({ commands });
    

    Now you should be able to run your app and try out the commands.

    For future reference, our full code for the app we created using React Speech Recognition Hooks looks like this:

    import { useRef, useState } from "react";
    import SpeechRecognition, { useSpeechRecognition } from "react-speech-recognition";
    import "./App.css";
    import microPhoneIcon from "./microphone.svg";

    function App() {
      const commands = [
        {
          command: "open *",
          callback: (website) => {
            window.open("http://" + website.split(" ").join(""));
          },
        },
        {
          command: "change background colour to *",
          callback: (color) => {
            document.body.style.background = color;
          },
        },
        {
          command: "reset",
          callback: () => {
            handleReset();
          },
        },
        {
          command: "reset background colour",
          callback: () => {
            document.body.style.background = `rgba(0, 0, 0, 0.8)`;
          },
        },
      ];
      const { transcript, resetTranscript } = useSpeechRecognition({ commands });
      const [isListening, setIsListening] = useState(false);
      const microphoneRef = useRef(null);

      if (!SpeechRecognition.browserSupportsSpeechRecognition()) {
        return (
          <div className="mircophone-container">
            Browser does not support Speech Recognition.
          </div>
        );
      }

      const handleListening = () => {
        setIsListening(true);
        microphoneRef.current.classList.add("listening");
        SpeechRecognition.startListening({
          continuous: true,
        });
      };
      const stopHandle = () => {
        setIsListening(false);
        microphoneRef.current.classList.remove("listening");
        SpeechRecognition.stopListening();
      };
      const handleReset = () => {
        stopHandle();
        resetTranscript();
      };

      return (
        <div className="microphone-wrapper">
          <div className="mircophone-container">
            <div
              className="microphone-icon-container"
              ref={microphoneRef}
              onClick={handleListening}
            >
              <img src={microPhoneIcon} className="microphone-icon" alt="microphone" />
            </div>
            <div className="microphone-status">
              {isListening ? "Listening..." : "Click to start Listening"}
            </div>
            {isListening && (
              <button className="microphone-stop btn" onClick={stopHandle}>
                Stop
              </button>
            )}
          </div>
          {transcript && (
            <div className="microphone-result-container">
              <div className="microphone-result-text">{transcript}</div>
              <button className="microphone-reset btn" onClick={handleReset}>
                Reset
              </button>
            </div>
          )}
        </div>
      );
    }
    export default App;

    Conclusion

    By now, you should hopefully have a better understanding of how to use React Speech Recognition Hooks in your project. For further reading, I recommend learning more about programming by voice and the other ways AI can assist in your coding endeavors.

    Thank you for reading the article. Please leave any feedback or comments below.
