<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Video To Audio</title>
<link rel="stylesheet" href="index.css"/>
</head>
<body>
<button>
<label for="file" id="filename">Select a video file</label>
<input type="file" name="file" id="file" accept="video/*,audio/*" onchange="fileChange(this)">
</button>
<script src="index.js"></script>
<script>
const fileChange = (input) => {
  const file = input.files[0]
  if (!file) return
  const label = document.getElementById('filename')
  label.innerHTML = file.name
  videoToAudio(file).then(audio => {
    console.log('audio', audio)
    audio && (label.innerHTML = audio.fileName)
  })
}
</script>
</body>
</html>
project:https://github.com/237005722/video-to-audio
demo:https://237005722.github.io/video-to-audio/
javascript video to audio. Front-end video-to-audio conversion: load the video with FileReader, decode it with decodeAudioData, re-render it with OfflineAudioContext, and finally convert the AudioBuffer to WAV.
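The pipeline described above can be sketched as follows. This is a minimal illustration, not the repository's actual implementation; in particular, `bufferToWav` is an assumed helper that prepends the 44-byte RIFF header to the rendered PCM data.

```javascript
// Sketch of the videoToAudio pipeline: FileReader -> decodeAudioData
// -> OfflineAudioContext -> WAV. `bufferToWav` is an assumed helper.
function videoToAudio(file) {
  return new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.onerror = reject;
    reader.onload = () => {
      const ctx = new (window.AudioContext || window.webkitAudioContext)();
      // Decode the audio track out of the video container
      ctx.decodeAudioData(reader.result, (decoded) => {
        // Re-render offline at the original channel count and sample rate
        const offline = new OfflineAudioContext(
          decoded.numberOfChannels, decoded.length, decoded.sampleRate);
        const source = offline.createBufferSource();
        source.buffer = decoded;
        source.connect(offline.destination);
        source.start();
        offline.startRendering()
          .then((rendered) => resolve({
            fileName: file.name.replace(/\.\w+$/, '.wav'),
            blob: bufferToWav(rendered), // assumed AudioBuffer -> WAV Blob helper
          }))
          .catch(reject);
      }, reject);
    };
    reader.readAsArrayBuffer(file);
  });
}
```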
Many people think it is impossible to turn a video file into an audio file purely in the front end. At first I thought so too; I searched the web and GitHub and found almost nothing, until I came across a video-to-audio tool on bejson and adapted it.
My approach:
Two processing JS files: deal.js (handles downloading the audio file) and work.js (handles the video-to-audio conversion).
deal.js
```javascript
// deal.js (excerpt, truncated in the original): feature-detect Blob and
// object-URL support before installing the download helper
(function (n) {
    var t, i;
    n.URL = n.URL || n.webkitURL;
    if (n.Blob && n.URL) try {
        new Blob;
        // ... (rest of the implementation omitted)
    } catch (e) {}
})(window);
```
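The download half of deal.js boils down to a Blob object URL plus a programmatic link click. A minimal standalone sketch of that idea (not the original minified code):

```javascript
// Minimal saveAs: wrap the blob in an object URL and click a temporary link
function saveAs(blob, fileName) {
  const URL = window.URL || window.webkitURL;
  const url = URL.createObjectURL(blob);
  const a = document.createElement('a');
  a.href = url;
  a.download = fileName; // triggers a download instead of navigation
  document.body.appendChild(a);
  a.click();
  document.body.removeChild(a);
  URL.revokeObjectURL(url); // release the object URL's memory
}
```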
Playing audio with an AudioContext object
(Advanced) Fine-grained audio processing with AudioContext: distortion, filtering, pitch shifting
(Advanced) Generating a clip of audio with AudioContext.createBuffer()
Playing audio with the audio tag
When playing music with the audio tag, the audio file can be loaded either by writing src directly on the tag, or by calling audio.setAttribute('src', …).
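Two of the techniques above, sketched as short browser functions (illustrative only; AudioContext and the DOM APIs are browser-only, and the function names are mine):

```javascript
// Generate and play one second of a 440 Hz sine tone via createBuffer()
function playTone(ctx, freq = 440, seconds = 1) {
  const buffer = ctx.createBuffer(1, ctx.sampleRate * seconds, ctx.sampleRate);
  const data = buffer.getChannelData(0);
  for (let i = 0; i < data.length; i++) {
    data[i] = Math.sin(2 * Math.PI * freq * i / ctx.sampleRate);
  }
  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.connect(ctx.destination);
  source.start();
}

// Play a file through a dynamically created audio element;
// audio.setAttribute('src', url) and audio.src = url are equivalent
function playAudioFile(url) {
  const audio = document.createElement('audio');
  audio.setAttribute('src', url);
  audio.controls = true;
  document.body.appendChild(audio);
  return audio.play(); // returns a Promise in modern browsers
}
```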
Getting the video's total duration:

```javascript
var elevideo = document.getElementById("video");
elevideo.addEventListener('loadedmetadata', function () { // metadata loaded
    console.log(elevideo.duration); // total length of the video, in seconds
});
```
```javascript
const audioContext = new (window.AudioContext || window.webkitAudioContext)();
const recordingTime = 5000; // stop recording after 5 seconds

navigator.mediaDevices.getUserMedia({ audio: true })
  .then(stream => {
    const mediaRecorder = new MediaRecorder(stream);
    const audioChunks = [];
    mediaRecorder.addEventListener("dataavailable", event => {
      audioChunks.push(event.data);
    });
    mediaRecorder.addEventListener("stop", () => {
      const audioBlob = new Blob(audioChunks);
      const reader = new FileReader();
      reader.readAsArrayBuffer(audioBlob);
      reader.onloadend = () => {
        audioContext.decodeAudioData(reader.result, (buffer) => {
          const audioBuffer = convertBuffer(buffer);
          const wavBlob = bufferToBlob(audioBuffer);
          sendBlobToServer(wavBlob);
        }, (err) => console.error('decodeAudioData failed', err));
      };
    });
    mediaRecorder.start();
    // Stop recording (and release the microphone) after recordingTime ms
    setTimeout(() => {
      mediaRecorder.stop();
      stream.getTracks().forEach(track => track.stop());
    }, recordingTime);
  });

// Mix all channels down to a single mono channel (same sample rate)
function convertBuffer(buffer) {
  const sampleRate = buffer.sampleRate;
  const numberOfChannels = buffer.numberOfChannels;
  const length = buffer.length;
  const newBuffer = audioContext.createBuffer(1, length, sampleRate);
  const newChannel = newBuffer.getChannelData(0);
  for (let i = 0; i < length; i++) {
    let channelSum = 0;
    for (let j = 0; j < numberOfChannels; j++) {
      channelSum += buffer.getChannelData(j)[i];
    }
    newChannel[i] = channelSum / numberOfChannels;
  }
  return newBuffer;
}

// Convert float samples to interleaved signed 16-bit PCM
function bufferToBlob(buffer) {
  const numberOfChannels = buffer.numberOfChannels;
  const length = buffer.length;
  const newBuffer = new ArrayBuffer(length * numberOfChannels * 2);
  const newView = new DataView(newBuffer);
  for (let i = 0; i < length; i++) {
    let offset = i * numberOfChannels * 2;
    for (let j = 0; j < numberOfChannels; j++) {
      let sample = Math.max(-1, Math.min(1, buffer.getChannelData(j)[i]));
      // Scale [-1, 1] floats to the signed 16-bit range
      newView.setInt16(offset, sample < 0 ? sample * 0x8000 : sample * 0x7FFF, true);
      offset += 2;
    }
  }
  // Note: this is raw PCM; a real WAV file also needs a 44-byte RIFF header
  return new Blob([newView], { type: "audio/wav" });
}

function sendBlobToServer(blob) {
  const xhr = new XMLHttpRequest();
  xhr.open("POST", "/api/upload-audio", true);
  xhr.setRequestHeader("Content-Type", "audio/wav");
  xhr.send(blob);
}
```
This code obtains an audio stream via the Web API `getUserMedia`, records it with a `MediaRecorder`, turns the recorded chunks into a Blob, and reads that Blob into memory with a `FileReader`. `decodeAudioData` then decodes it into an audio buffer, `convertBuffer` down-mixes it to a mono 16-bit buffer (the sample rate is left unchanged), and finally the buffer is converted to a Blob and sent to the backend.
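The Blob produced by `bufferToBlob` is labeled `audio/wav` but contains headerless PCM; a real WAV file needs a 44-byte RIFF header in front of the samples. A sketch of building one for 16-bit PCM (the function name `wavHeader` is illustrative):

```javascript
// Build a 44-byte RIFF/WAVE header for 16-bit PCM data.
function wavHeader(dataLength, numChannels, sampleRate) {
  const buf = new ArrayBuffer(44);
  const v = new DataView(buf);
  const writeStr = (off, s) => {
    for (let i = 0; i < s.length; i++) v.setUint8(off + i, s.charCodeAt(i));
  };
  const byteRate = sampleRate * numChannels * 2; // bytes per second
  writeStr(0, 'RIFF');
  v.setUint32(4, 36 + dataLength, true); // total file size minus 8
  writeStr(8, 'WAVE');
  writeStr(12, 'fmt ');
  v.setUint32(16, 16, true);             // fmt chunk size
  v.setUint16(20, 1, true);              // audio format: 1 = PCM
  v.setUint16(22, numChannels, true);
  v.setUint32(24, sampleRate, true);
  v.setUint32(28, byteRate, true);
  v.setUint16(32, numChannels * 2, true); // block align
  v.setUint16(34, 16, true);              // bits per sample
  writeStr(36, 'data');
  v.setUint32(40, dataLength, true);      // PCM payload size
  return buf;
}
```

Prepending this header to the PCM bytes, e.g. `new Blob([wavHeader(pcm.byteLength, 1, 16000), pcm], { type: "audio/wav" })`, yields a playable WAV file.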
Note that this sample code is for reference only; a real application may need to adapt it to its specific requirements.
javascript video to audio demo. Front-end video-to-audio conversion. FileReader, decodeAudioData, OfflineAudioContext