We have already shown how to handle video; now let's look at the audio part.

In principle, a single MediaExtractor can handle both the video and the audio track, switching between them dynamically with selectTrack(). In practice, however, that approach often stops delivering data for reasons we haven't pinned down, so here we take the simpler fixed approach: video and audio each get their own MediaExtractor.

Audio is handled much the same way as video, with two differences: audio does not need to be rendered to a SurfaceView, and audio needs an extra AudioTrack to play back the decoded PCM data.
private MediaExtractor extractorAudio;
private MediaCodec decoderAudio;

extractorAudio = new MediaExtractor();
extractorAudio.setDataSource("myTest.mp4");

for (int i = 0; i < extractorAudio.getTrackCount(); i++) {
    MediaFormat format = extractorAudio.getTrackFormat(i);
    String mime = format.getString(MediaFormat.KEY_MIME);
    if (mime.startsWith("audio/")) {
        audioTrack = i;
        extractorAudio.selectTrack(audioTrack);
        formatAudio = format;
        decoderAudio = MediaCodec.createDecoderByType(mime);
        sampleRate = format.getInteger(MediaFormat.KEY_SAMPLE_RATE);
        decoderAudio.configure(format, null, null, 0);
        break;
    }
}

if (audioTrack >= 0) {
    if (decoderAudio == null) {
        Log.e(TAG, "Can't find audio info!");
        return;
    } else {
        // create our AudioTrack instance;
        // use the track's actual sample rate rather than hardcoding 44100
        int minBufferSize = AudioTrack.getMinBufferSize(sampleRate,
                AudioFormat.CHANNEL_OUT_STEREO,
                AudioFormat.ENCODING_PCM_16BIT);
        int bufferSize = 4 * minBufferSize;
        playAudioTrack = new AudioTrack(
                AudioManager.STREAM_MUSIC,
                formatAudio.getInteger(MediaFormat.KEY_SAMPLE_RATE),
                AudioFormat.CHANNEL_OUT_STEREO,
                AudioFormat.ENCODING_PCM_16BIT,
                bufferSize,
                AudioTrack.MODE_STREAM);
        playAudioTrack.play();
        decoderAudio.start();
    }
}
As with video, extractorAudio locates the audio track by its MIME type. For decoderAudio.configure() we only need to pass in the track format; the remaining parameters can all be left as null/0 (no Surface, no crypto, no flags). We also need an AudioTrack: what the decoder ultimately produces is raw PCM data, and AudioTrack is what actually turns it into sound.
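The track-selection step above boils down to scanning the container's tracks for the first one whose MIME type starts with "audio/". Here is a minimal plain-Java sketch of just that matching logic; the MediaFormat objects are replaced by bare MIME strings for illustration, so this runs without any Android dependency:

```java
// Plain-Java sketch of the track-matching step: find the first track
// whose MIME type starts with "audio/". In the real code each entry
// would come from extractorAudio.getTrackFormat(i).getString(KEY_MIME).
public class TrackFinder {
    static int findFirstAudioTrack(String[] trackMimes) {
        for (int i = 0; i < trackMimes.length; i++) {
            if (trackMimes[i] != null && trackMimes[i].startsWith("audio/")) {
                return i;
            }
        }
        return -1; // no audio track in this file
    }

    public static void main(String[] args) {
        // typical mp4: one video track followed by one AAC audio track
        String[] mimes = { "video/avc", "audio/mp4a-latm" };
        System.out.println(findFirstAudioTrack(mimes)); // prints 1
        System.out.println(findFirstAudioTrack(new String[] { "video/avc" })); // prints -1
    }
}
```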
The decode part follows:
ByteBuffer[] inputBuffersAudio = null;
ByteBuffer[] outputBuffersAudio = null;
BufferInfo infoAudio = null;

if (audioTrack >= 0) {
    inputBuffersAudio = decoderAudio.getInputBuffers();
    outputBuffersAudio = decoderAudio.getOutputBuffers();
    infoAudio = new BufferInfo();
}

boolean isEOS = false;

while (!Thread.interrupted()) {
    if (audioTrack >= 0) {
        if (!isEOS) {
            int inIndex = -1;
            try {
                inIndex = decoderAudio.dequeueInputBuffer(10000);
            } catch (Exception e) {
                e.printStackTrace();
            }
            if (inIndex >= 0) {
                ByteBuffer buffer = inputBuffersAudio[inIndex];
                buffer.clear(); // reset before filling, not after queueing
                int sampleSize = extractorAudio.readSampleData(buffer, 0);
                if (sampleSize < 0) {
                    decoderAudio.queueInputBuffer(inIndex, 0, 0, 0,
                            MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                    isEOS = true;
                } else {
                    decoderAudio.queueInputBuffer(inIndex, 0, sampleSize,
                            extractorAudio.getSampleTime(), 0);
                    extractorAudio.advance();
                }
            }
        }

        int outIndex = -1;
        try {
            outIndex = decoderAudio.dequeueOutputBuffer(infoAudio, 10000);
        } catch (Exception e) {
            e.printStackTrace();
        }

        switch (outIndex) {
            case MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED:
                Log.d(TAG, "INFO_OUTPUT_BUFFERS_CHANGED");
                outputBuffersAudio = decoderAudio.getOutputBuffers();
                break;
            case MediaCodec.INFO_OUTPUT_FORMAT_CHANGED:
                Log.d(TAG, "New format " + decoderAudio.getOutputFormat());
                playAudioTrack.setPlaybackRate(
                        formatAudio.getInteger(MediaFormat.KEY_SAMPLE_RATE));
                break;
            case MediaCodec.INFO_TRY_AGAIN_LATER:
                Log.d(TAG, "dequeueOutputBuffer timed out!");
                break;
            default:
                if (outIndex >= 0) {
                    ByteBuffer buffer = outputBuffersAudio[outIndex];
                    byte[] chunk = new byte[infoAudio.size];
                    buffer.get(chunk);
                    buffer.clear();
                    if (chunk.length > 0) {
                        playAudioTrack.write(chunk, 0, chunk.length);
                    }
                    decoderAudio.releaseOutputBuffer(outIndex, false);
                }
                break;
        }

        // All decoded frames have been played; we can stop now
        if ((infoAudio.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
            Log.d(TAG, "OutputBuffer BUFFER_FLAG_END_OF_STREAM");
            break;
        }
    }
}

if (audioTrack >= 0) {
    decoderAudio.stop();
    decoderAudio.release();
    playAudioTrack.stop();
}
extractorAudio.release();
This is much the same as the video case, so we won't walk through it again. The one thing to watch is decoderAudio.releaseOutputBuffer(outIndex, false): the second argument must be false, i.e. no rendering to a Surface, since we have already written the decoded PCM to the AudioTrack ourselves.
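The copy-out step in the default branch, where infoAudio.size bytes of decoded PCM are pulled from the codec's output ByteBuffer into a byte[] before AudioTrack.write(), can be exercised with plain java.nio. In this sketch the "decoder output" is just a hand-filled buffer standing in for outputBuffersAudio[outIndex], and the offset/size parameters mimic the fields of BufferInfo:

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

// Plain-Java sketch of the PCM copy-out done in the default branch:
// read `size` bytes starting at `offset` from the codec's output buffer
// into a byte[] that would be handed to AudioTrack.write().
public class PcmCopy {
    static byte[] copyChunk(ByteBuffer output, int offset, int size) {
        byte[] chunk = new byte[size];
        output.position(offset); // BufferInfo.offset in the real code
        output.get(chunk);       // same pattern as buffer.get(chunk)
        output.clear();          // same pattern as buffer.clear()
        return chunk;
    }

    public static void main(String[] args) {
        // fake "decoder output": 8 bytes of PCM
        ByteBuffer fake = ByteBuffer.allocate(8);
        fake.put(new byte[] { 1, 2, 3, 4, 5, 6, 7, 8 });
        byte[] chunk = copyChunk(fake, 0, 4);
        System.out.println(Arrays.toString(chunk)); // prints [1, 2, 3, 4]
    }
}
```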