Supporting DXVA 2.0 in DirectShow

Over the past few days I have been working on DXVA2 hardware acceleration. Since I could not find much material on it, I translated two related Microsoft documents, and I plan to describe implementing DXVA2 with ffmpeg in a third post. This is the second post. Original English article: https://msdn.microsoft.com/en-us/library/aa965245(v=vs.85).aspx
The first post, a translation of "Direct3D device manager", is here: http://www.cnblogs.com/betterwgo/p/6124588.html

This topic describes how to support DirectX Video Acceleration (DXVA) 2.0 in a DirectShow decoder filter. Specifically, it describes the communication between the decoder and the video renderer. It does not describe how to implement DXVA decoding itself.

1. Prerequisites

This topic assumes that you are familiar with writing DirectShow filters. For more information, see the Writing DirectShow Filters topic in the DirectShow SDK documentation (https://msdn.microsoft.com/en-us/library/dd391013(v=vs.85).aspx). The code examples in this topic assume that the decoder filter derives from the CTransformFilter class, with the following declaration:

class CDecoder : public CTransformFilter
{
public:
    static CUnknown* WINAPI CreateInstance(IUnknown *pUnk, HRESULT *pHr);

    HRESULT CompleteConnect(PIN_DIRECTION direction, IPin *pPin);

    HRESULT InitAllocator(IMemAllocator **ppAlloc);
    HRESULT DecideBufferSize(IMemAllocator *pAlloc, ALLOCATOR_PROPERTIES *pProp);

    // TODO: The implementations of these methods depend on the specific decoder.
    HRESULT CheckInputType(const CMediaType *mtIn);
    HRESULT CheckTransform(const CMediaType *mtIn, const CMediaType *mtOut);
    HRESULT GetMediaType(int iPosition, CMediaType *pMediaType);

private:
    CDecoder(HRESULT *pHr);
    ~CDecoder();

    CBasePin * GetPin(int n);

    HRESULT ConfigureDXVA2(IPin *pPin);
    HRESULT SetEVRForDXVA2(IPin *pPin);

    HRESULT FindDecoderConfiguration(
        /* [in] */  IDirectXVideoDecoderService *pDecoderService,
        /* [in] */  const GUID& guidDecoder, 
        /* [out] */ DXVA2_ConfigPictureDecode *pSelectedConfig,
        /* [out] */ BOOL *pbFoundDXVA2Configuration
        );

private:
    IDirectXVideoDecoderService *m_pDecoderService;

    DXVA2_ConfigPictureDecode m_DecoderConfig;
    GUID                      m_DecoderGuid;
    HANDLE                    m_hDevice;

    FOURCC                    m_fccOutputFormat;
};

In this topic, the term decoder refers to the decoder filter, which receives compressed video and outputs uncompressed video. The term decoder device refers to the hardware video accelerator implemented by the graphics driver.

A decoder must perform the following basic steps to support DXVA 2.0:

(1) Negotiate a media type. (Personal note: I understand this as mapping the media type obtained from the source, for example the format reported by ffmpeg, to the corresponding media type used by DXVA2.)

(2) Find a DXVA decoder configuration.

(3) Notify the video renderer that the decoder is using DXVA decoding.

(4) Provide a custom allocator that allocates Direct3D surfaces.

2. Migration Notes

If you are migrating from DXVA 1.0 to DXVA 2.0, you should be aware of the following significant differences between the two versions:

(1) DXVA 2.0 does not use the IAMVideoAccelerator and IAMVideoAcceleratorNotify interfaces, because the decoder can access the DXVA 2.0 APIs directly through the IDirectXVideoDecoder interface.

(2) During media type negotiation, the decoder does not use a video acceleration GUID as the subtype. Instead, the subtype is simply the uncompressed video format (such as NV12), the same as with software decoding.

(3) The procedure for configuring the accelerator has changed. In DXVA 1.0, the decoder calls Execute with a DXVA_ConfigPictureDecode structure to configure the accelerator. In DXVA 2.0, the decoder uses the IDirectXVideoDecoderService interface, as described in the next section.

(4) The decoder allocates the uncompressed buffers; the video renderer no longer performs this task.

(5) Instead of calling IAMVideoAccelerator::DisplayFrame to display a decoded frame, the decoder delivers the frame to the renderer by calling IMemInputPin::Receive, just as with software decoding.

(6) The decoder is no longer responsible for checking when data buffers are safe for updates, so DXVA 2.0 has no method equivalent to IAMVideoAccelerator::QueryRenderStatus.

(7) Subpicture blending is done by the video renderer, using the DXVA 2.0 video processor APIs. Decoders that provide subpictures (for example, DVD decoders) should send subpicture data on a separate output pin.

For decoding operations, DXVA 2.0 uses the same data structures as DXVA 1.0. (Personal note: the data structures here are the structures that hold the decoding data.)

The enhanced video renderer (EVR) filter supports DXVA 2.0. The Video Mixing Renderer filters (VMR-7 and VMR-9) support only DXVA 1.0.

3. Finding a Decoder Configuration

After the decoder negotiates the output media type, it must find a compatible configuration for the DXVA decoder device. You can perform this step in the output pin's CBaseOutputPin::CompleteConnect method. This step ensures that the graphics driver supports the capabilities needed by the decoder, before the decoder commits to using DXVA.

To find a configuration for the decoder device:

1) Query the video renderer's input pin for the IMFGetService interface.

2) Call IMFGetService::GetService to get a pointer to the IDirect3DDeviceManager9 interface. The service GUID is MR_VIDEO_ACCELERATION_SERVICE.

3) Call IDirect3DDeviceManager9::OpenDeviceHandle to get a handle to the renderer's Direct3D device.

4) Call IDirect3DDeviceManager9::GetVideoService and pass in the device handle. This method returns a pointer to the IDirectXVideoDecoderService interface.

5) Call IDirectXVideoDecoderService::GetDecoderDeviceGuids. This method returns an array of decoder device GUIDs.

6) Loop through the array of decoder GUIDs and look for the ones that the decoder supports. For example, for an MPEG-2 decoder you would look for DXVA2_ModeMPEG2_MOCOMP, DXVA2_ModeMPEG2_IDCT, or DXVA2_ModeMPEG2_VLD.

7) When you find a candidate decoder device GUID, pass it to the IDirectXVideoDecoderService::GetDecoderRenderTargets method. This method returns an array of render target formats, specified as D3DFORMAT values.

8) Loop through the render target formats and look for one that matches your output format. Typically, a decoder device supports a single render target format. The decoder filter should connect to the renderer using this subtype. In the first call to CompleteConnect, the decoder can determine the render target format and then return this format as a preferred output type.

9) Call IDirectXVideoDecoderService::GetDecoderConfigurations. Pass in the same decoder device GUID, along with a DXVA2_VideoDesc structure that describes the proposed format. The method returns an array of DXVA2_ConfigPictureDecode structures. Each structure describes one possible configuration of the decoder device.

10) Assuming that the previous steps succeeded, store the Direct3D device handle, the decoder device GUID, and the configuration structure. The filter will use this information to create the decoder device.

The following code shows how to find a decoder configuration:

HRESULT CDecoder::ConfigureDXVA2(IPin *pPin)
{
    UINT    cDecoderGuids = 0;
    BOOL    bFoundDXVA2Configuration = FALSE;
    GUID    guidDecoder = GUID_NULL;

    DXVA2_ConfigPictureDecode config;
    ZeroMemory(&config, sizeof(config));

    // Variables that follow must be cleaned up at the end.

    IMFGetService               *pGetService = NULL;
    IDirect3DDeviceManager9     *pDeviceManager = NULL;
    IDirectXVideoDecoderService *pDecoderService = NULL;

    GUID   *pDecoderGuids = NULL; // size = cDecoderGuids
    HANDLE hDevice = INVALID_HANDLE_VALUE;

    // Query the pin for IMFGetService.
    HRESULT hr = pPin->QueryInterface(IID_PPV_ARGS(&pGetService));

    // Get the Direct3D device manager.
    if (SUCCEEDED(hr))
    {
        hr = pGetService->GetService(

            MR_VIDEO_ACCELERATION_SERVICE,
            IID_PPV_ARGS(&pDeviceManager)
            );
    }

    // Open a new device handle.
    if (SUCCEEDED(hr))
    {
        hr = pDeviceManager->OpenDeviceHandle(&hDevice);
    } 

    // Get the video decoder service.
    if (SUCCEEDED(hr))
    {
        hr = pDeviceManager->GetVideoService(
            hDevice, IID_PPV_ARGS(&pDecoderService));
    }

    // Get the decoder GUIDs.
    if (SUCCEEDED(hr))
    {
        hr = pDecoderService->GetDecoderDeviceGuids(
            &cDecoderGuids, &pDecoderGuids);
    }

    if (SUCCEEDED(hr))
    {
        // Look for the decoder GUIDs we want.
        for (UINT iGuid = 0; iGuid < cDecoderGuids; iGuid++)
        {
            // Do we support this mode?
            if (!IsSupportedDecoderMode(pDecoderGuids[iGuid]))
            {
                continue;
            }

            // Find a configuration that we support. 
            hr = FindDecoderConfiguration(pDecoderService, pDecoderGuids[iGuid],
                &config, &bFoundDXVA2Configuration);
            if (FAILED(hr))
            {
                break;
            }

            if (bFoundDXVA2Configuration)
            {
                // Found a good configuration. Save the GUID and exit the loop.
                guidDecoder = pDecoderGuids[iGuid];
                break;
            }
        }
    }

    if (!bFoundDXVA2Configuration)
    {
        hr = E_FAIL; // Unable to find a configuration.
    }

    if (SUCCEEDED(hr))
    {
        // Store the things we will need later.

        SafeRelease(&m_pDecoderService);
        m_pDecoderService = pDecoderService;
        m_pDecoderService->AddRef();

        m_DecoderConfig = config;
        m_DecoderGuid = guidDecoder;
        m_hDevice = hDevice;
    }

    if (FAILED(hr))
    {
        if (hDevice != INVALID_HANDLE_VALUE)
        {
            pDeviceManager->CloseDeviceHandle(hDevice);
        }
    }

    SafeRelease(&pGetService);
    SafeRelease(&pDeviceManager);
    SafeRelease(&pDecoderService);
    return hr;
}
HRESULT CDecoder::FindDecoderConfiguration(
    /* [in] */  IDirectXVideoDecoderService *pDecoderService,
    /* [in] */  const GUID& guidDecoder, 
    /* [out] */ DXVA2_ConfigPictureDecode *pSelectedConfig,
    /* [out] */ BOOL *pbFoundDXVA2Configuration
    )
{
    HRESULT hr = S_OK;
    UINT cFormats = 0;
    UINT cConfigurations = 0;

    D3DFORMAT                   *pFormats = NULL;     // size = cFormats
    DXVA2_ConfigPictureDecode   *pConfig = NULL;      // size = cConfigurations

    // Find the valid render target formats for this decoder GUID.
    hr = pDecoderService->GetDecoderRenderTargets(
        guidDecoder,
        &cFormats,
        &pFormats
        );

    if (SUCCEEDED(hr))
    {
        // Look for a format that matches our output format.
        for (UINT iFormat = 0; iFormat < cFormats;  iFormat++)
        {
            if (pFormats[iFormat] != (D3DFORMAT)m_fccOutputFormat)
            {
                continue;
            }

            // Fill in the video description. Set the width, height, format, 
            // and frame rate.
            DXVA2_VideoDesc videoDesc = {0};

            FillInVideoDescription(&videoDesc); // Private helper function.
            videoDesc.Format = pFormats[iFormat];

            // Get the available configurations.
            hr = pDecoderService->GetDecoderConfigurations(
                guidDecoder,
                &videoDesc,
                NULL, // Reserved.
                &cConfigurations,
                &pConfig
                );

            if (FAILED(hr))
            {
                break;
            }

            // Find a supported configuration.
            for (UINT iConfig = 0; iConfig < cConfigurations; iConfig++)
            {
                if (IsSupportedDecoderConfig(pConfig[iConfig]))
                {
                    // This configuration is good.
                    *pbFoundDXVA2Configuration = TRUE;
                    *pSelectedConfig = pConfig[iConfig];
                    break;
                }
            }

            CoTaskMemFree(pConfig);
            break;

        } // End of formats loop.
    }

    CoTaskMemFree(pFormats);

    // Note: It is possible to return S_OK without finding a configuration.
    return hr;
}

Because this is a generic example, some of the logic has been placed in helper functions that must be implemented by the decoder. The following helper functions are used:

// Returns TRUE if the decoder supports a given decoding mode.
BOOL IsSupportedDecoderMode(const GUID& mode);

// Returns TRUE if the decoder supports a given decoding configuration.
BOOL IsSupportedDecoderConfig(const DXVA2_ConfigPictureDecode& config);

// Fills in a DXVA2_VideoDesc structure based on the input format.
void FillInVideoDescription(DXVA2_VideoDesc *pDesc);

4. Notifying the Video Renderer

If the decoder finds a decoder configuration, the next step is to notify the video renderer that the decoder will use hardware-accelerated decoding. You can perform this step in the CompleteConnect method. This step must occur before the allocator is selected, because it affects how the allocator is selected.

1) Query the renderer's input pin for the IMFGetService interface.

2) Call IMFGetService::GetService to get a pointer to the IDirectXVideoMemoryConfiguration interface. The service GUID is MR_VIDEO_ACCELERATION_SERVICE.

3) Call IDirectXVideoMemoryConfiguration::GetAvailableSurfaceTypeByIndex in a loop, incrementing the dwTypeIndex variable from zero. Stop when the method returns DXVA2_SurfaceType_DecoderRenderTarget in the pdwType parameter. This step ensures that the video renderer supports hardware-accelerated decoding. This step will always succeed for the EVR filter.

4) If the previous step succeeded, call IDirectXVideoMemoryConfiguration::SetSurfaceType with the value DXVA2_SurfaceType_DecoderRenderTarget. Calling SetSurfaceType with this value puts the video renderer into DXVA mode. When the video renderer is in this mode, the decoder must provide its own allocator.

The following code shows how to notify the video renderer:

HRESULT CDecoder::SetEVRForDXVA2(IPin *pPin)
{
    HRESULT hr = S_OK;

    IMFGetService                       *pGetService = NULL;
    IDirectXVideoMemoryConfiguration    *pVideoConfig = NULL;

    // Query the pin for IMFGetService.
    hr = pPin->QueryInterface(__uuidof(IMFGetService), (void**)&pGetService);

    // Get the IDirectXVideoMemoryConfiguration interface.
    if (SUCCEEDED(hr))
    {
        hr = pGetService->GetService(
            MR_VIDEO_ACCELERATION_SERVICE, IID_PPV_ARGS(&pVideoConfig));
    }

    // Notify the EVR. 
    if (SUCCEEDED(hr))
    {
        DXVA2_SurfaceType surfaceType;

        for (DWORD iTypeIndex = 0; ; iTypeIndex++)
        {
            hr = pVideoConfig->GetAvailableSurfaceTypeByIndex(iTypeIndex, &surfaceType);
            
            if (FAILED(hr))
            {
                break;
            }

            if (surfaceType == DXVA2_SurfaceType_DecoderRenderTarget)
            {
                hr = pVideoConfig->SetSurfaceType(DXVA2_SurfaceType_DecoderRenderTarget);
                break;
            }
        }
    }

    SafeRelease(&pGetService);
    SafeRelease(&pVideoConfig);

    return hr;
}

If the decoder finds a valid configuration and successfully notifies the video renderer, the decoder can use DXVA for decoding. The decoder must implement a custom allocator for its output pin, as described in the next section.

5. Allocating Uncompressed Buffers

In DXVA 2.0, the decoder is responsible for allocating the Direct3D surfaces that are used as uncompressed video buffers. Therefore, the decoder must implement a custom allocator that creates the surfaces. The media samples provided by this allocator hold pointers to the Direct3D surfaces. The EVR retrieves a pointer to the surface by calling IMFGetService::GetService on the media sample. The service identifier is MR_BUFFER_SERVICE.

To implement the custom allocator, perform the following steps:

1) Define a class for the media samples. This class derives from CMediaSample. Within this class, do the following:

    a) Store a pointer to the Direct3D surface. b) Implement the IMFGetService interface. In the GetService method, if the service GUID is MR_BUFFER_SERVICE, query the Direct3D surface for the requested interface. Otherwise, GetService returns MF_E_UNSUPPORTED_SERVICE. c) Override the CMediaSample::GetPointer method to return E_NOTIMPL.

2) Define a class for the allocator. The allocator can derive from the CBaseAllocator class. Within this class, do the following:

    a) Override the CBaseAllocator::Alloc method. In this method, call IDirectXVideoAccelerationService::CreateSurface to create the surfaces. (The IDirectXVideoDecoderService interface inherits this method from IDirectXVideoAccelerationService.) b) Override the CBaseAllocator::Free method to release the surfaces.

3) On your filter's output pin, override the CBaseOutputPin::InitAllocator method. In this method, create an instance of your custom allocator.

4) In your filter, implement the CTransformFilter::DecideBufferSize method. The pProperties parameter indicates the number of surfaces that the EVR requires. Add to this value the number of surfaces your decoder needs, and call IMemAllocator::SetProperties on the allocator. (A sketch of steps 3 and 4 appears after the Free method below.)

The following code shows how to implement the media sample class:

class CDecoderSample : public CMediaSample, public IMFGetService
{
    friend class CDecoderAllocator;

public:

    CDecoderSample(CDecoderAllocator *pAlloc, HRESULT *phr)
        : CMediaSample(NAME("DecoderSample"), (CBaseAllocator*)pAlloc, phr, NULL, 0),
          m_pSurface(NULL),
          m_dwSurfaceId(0)
    { 
    }

    // Note: CMediaSample does not derive from CUnknown, so we cannot use the
    //       DECLARE_IUNKNOWN macro that is used by most of the filter classes.

    STDMETHODIMP QueryInterface(REFIID riid, void **ppv)
    {
        CheckPointer(ppv, E_POINTER);

        if (riid == IID_IMFGetService)
        {
            *ppv = static_cast<IMFGetService*>(this);
            AddRef();
            return S_OK;
        }
        else
        {
            return CMediaSample::QueryInterface(riid, ppv);
        }
    }
    STDMETHODIMP_(ULONG) AddRef()
    {
        return CMediaSample::AddRef();
    }

    STDMETHODIMP_(ULONG) Release()
    {
        // Return a temporary variable for thread safety.
        ULONG cRef = CMediaSample::Release();
        return cRef;
    }

    // IMFGetService::GetService
    STDMETHODIMP GetService(REFGUID guidService, REFIID riid, LPVOID *ppv)
    {
        if (guidService != MR_BUFFER_SERVICE)
        {
            return MF_E_UNSUPPORTED_SERVICE;
        }
        else if (m_pSurface == NULL)
        {
            return E_NOINTERFACE;
        }
        else
        {
            return m_pSurface->QueryInterface(riid, ppv);
        }
    }

    // Override GetPointer because this class does not manage a system memory buffer.
    // The EVR uses the MR_BUFFER_SERVICE service to get the Direct3D surface.
    STDMETHODIMP GetPointer(BYTE ** ppBuffer)
    {
        return E_NOTIMPL;
    }

private:

    // Sets the pointer to the Direct3D surface. 
    void SetSurface(DWORD surfaceId, IDirect3DSurface9 *pSurf)
    {
        SafeRelease(&m_pSurface);

        m_pSurface = pSurf;
        if (m_pSurface)
        {
            m_pSurface->AddRef();
        }

        m_dwSurfaceId = surfaceId;
    }

    IDirect3DSurface9   *m_pSurface;
    DWORD               m_dwSurfaceId;
};

The following code shows how to implement the Alloc method on the allocator:

HRESULT CDecoderAllocator::Alloc()
{
    CAutoLock lock(this);

    HRESULT hr = S_OK;

    if (m_pDXVA2Service == NULL)
    {
        return E_UNEXPECTED;
    }

    hr = CBaseAllocator::Alloc();

    // If the requirements have not changed, do not reallocate.
    if (hr == S_FALSE)
    {
        return S_OK;
    }

    if (SUCCEEDED(hr))
    {
        // Free the old resources.
        Free();

        // Allocate a new array of pointers.
        m_ppRTSurfaceArray = new (std::nothrow) IDirect3DSurface9*[m_lCount];
        if (m_ppRTSurfaceArray == NULL)
        {
            hr = E_OUTOFMEMORY;
        }
        else
        {
            ZeroMemory(m_ppRTSurfaceArray, sizeof(IDirect3DSurface9*) * m_lCount);
        }
    }

    // Allocate the surfaces.
    if (SUCCEEDED(hr))
    {
        hr = m_pDXVA2Service->CreateSurface(
            m_dwWidth,
            m_dwHeight,
            m_lCount - 1,
            (D3DFORMAT)m_dwFormat,
            D3DPOOL_DEFAULT,
            0,
            DXVA2_VideoDecoderRenderTarget,
            m_ppRTSurfaceArray,
            NULL
            );
    }

    if (SUCCEEDED(hr))
    {
        for (m_lAllocated = 0; m_lAllocated < m_lCount; m_lAllocated++)
        {
            CDecoderSample *pSample = new (std::nothrow) CDecoderSample(this, &hr);

            if (pSample == NULL)
            {
                hr = E_OUTOFMEMORY;
                break;
            }
            if (FAILED(hr))
            {
                break;
            }
            // Assign the Direct3D surface pointer and the index.
            pSample->SetSurface(m_lAllocated, m_ppRTSurfaceArray[m_lAllocated]);

            // Add to the sample list.
            m_lFree.Add(pSample);
        }
    }

    if (SUCCEEDED(hr))
    {
        m_bChanged = FALSE;
    }
    return hr;
}

The following code shows the Free method:

void CDecoderAllocator::Free()
{
    CMediaSample *pSample = NULL;

    do
    {
        pSample = m_lFree.RemoveHead();
        if (pSample)
        {
            delete pSample;
        }
    } while (pSample);

    if (m_ppRTSurfaceArray)
    {
        for (long i = 0; i < m_lAllocated; i++)
        {
            SafeRelease(&m_ppRTSurfaceArray[i]);
        }

        delete [] m_ppRTSurfaceArray;
    }
    m_lAllocated = 0;
}
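
Steps 3 and 4 of the allocator checklist (InitAllocator and DecideBufferSize) are not covered by the original sample code. The following is a minimal sketch under the assumption that the custom allocator class is called CDecoderAllocator and that a hypothetical member m_cDecoderSurfaces holds the number of extra surfaces the decoder itself needs; neither name comes from the original article.

// Sketch only: CDecoderAllocator's constructor signature and m_cDecoderSurfaces
// are assumptions, not part of the original sample.
HRESULT CDecoder::InitAllocator(IMemAllocator **ppAlloc)
{
    HRESULT hr = S_OK;

    // Create an instance of the custom allocator.
    CDecoderAllocator *pAlloc = new (std::nothrow) CDecoderAllocator(&hr);
    if (pAlloc == NULL)
    {
        return E_OUTOFMEMORY;
    }
    if (FAILED(hr))
    {
        delete pAlloc;
        return hr;
    }

    // Hand out the IMemAllocator interface (this AddRef's the allocator).
    return pAlloc->QueryInterface(IID_IMemAllocator, (void**)ppAlloc);
}

HRESULT CDecoder::DecideBufferSize(IMemAllocator *pAlloc, ALLOCATOR_PROPERTIES *pProp)
{
    // pProp->cBuffers is the number of surfaces the EVR requires.
    // Add the surfaces the decoder needs for its own reference frames.
    pProp->cBuffers += m_cDecoderSurfaces;  // Hypothetical member.
    pProp->cbBuffer = 1;  // Buffer size is not meaningful for Direct3D surfaces.

    ALLOCATOR_PROPERTIES actual;
    ZeroMemory(&actual, sizeof(actual));
    return pAlloc->SetProperties(pProp, &actual);
}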

6. Decoding

Create the decoder device by calling IDirectXVideoDecoderService::CreateVideoDecoder. The method returns a pointer to the IDirectXVideoDecoder interface of the decoder device.
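
The original article does not show this call. As a rough sketch, assuming the filter can reach the surface array created by its allocator (m_ppRTSurfaceArray and m_lCount from the allocator example above) and stores the result in a hypothetical m_pDecoder member, the call might look like this:

// Sketch only: access to the allocator's surface array and the m_pDecoder
// member are assumptions, not part of the original sample.
DXVA2_VideoDesc videoDesc = {0};
FillInVideoDescription(&videoDesc);               // Same private helper as before.
videoDesc.Format = (D3DFORMAT)m_fccOutputFormat;

IDirectXVideoDecoder *pDecoder = NULL;

HRESULT hr = m_pDecoderService->CreateVideoDecoder(
    m_DecoderGuid,        // Decoder device GUID found in ConfigureDXVA2.
    &videoDesc,           // Description of the video to decode.
    &m_DecoderConfig,     // Configuration selected in FindDecoderConfiguration.
    m_ppRTSurfaceArray,   // Direct3D surfaces created by the custom allocator.
    (UINT)m_lCount,       // Number of surfaces.
    &pDecoder             // Receives the IDirectXVideoDecoder pointer.
    );

if (SUCCEEDED(hr))
{
    m_pDecoder = pDecoder;  // Keep the decoder device for the decoding loop.
}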

For each frame, call IDirect3DDeviceManager9::TestDevice to test the device handle. If the device has changed, the method returns DXVA2_E_NEW_VIDEO_DEVICE. If that happens, do the following (a short sketch appears after this list):

1) Call IDirect3DDeviceManager9::CloseDeviceHandle to close the device handle.

2) Release the IDirectXVideoDecoderService and IDirectXVideoDecoder pointers.

3) Open a new device handle.

4) Negotiate a new decoder configuration, as described in section 3.

5) Create a new decoder device.
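
A minimal sketch of this check, where m_pDeviceManager is a hypothetical member holding the IDirect3DDeviceManager9 pointer and RecreateDecoder() is a hypothetical helper that performs steps 4 and 5 above:

// Sketch only: m_pDeviceManager, m_pDecoder, and RecreateDecoder() are
// hypothetical members/helpers, not part of the original sample.
HRESULT hr = m_pDeviceManager->TestDevice(m_hDevice);

if (hr == DXVA2_E_NEW_VIDEO_DEVICE)
{
    // The renderer's Direct3D device changed; rebuild the DXVA decoder.
    m_pDeviceManager->CloseDeviceHandle(m_hDevice);        // Step 1
    m_hDevice = INVALID_HANDLE_VALUE;

    SafeRelease(&m_pDecoderService);                        // Step 2
    SafeRelease(&m_pDecoder);

    hr = m_pDeviceManager->OpenDeviceHandle(&m_hDevice);    // Step 3

    if (SUCCEEDED(hr))
    {
        hr = RecreateDecoder();  // Steps 4 and 5: new configuration, new decoder device.
    }
}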

Assuming that the device handle is valid, the decoding process works as follows:

1) Call IDirectXVideoDecoder::BeginFrame.

2) Do the following, one or more times:

    a) Call IDirectXVideoDecoder::GetBuffer to get a DXVA decoder buffer.

    b) Fill the buffer.

    c) Call IDirectXVideoDecoder::ReleaseBuffer.

3) Call IDirectXVideoDecoder::Execute to perform the decoding operations on the frame.

DXVA 2.0 uses the same data structures as DXVA 1.0 for decoding operations.

Between each pair of BeginFrame/Execute calls, you may call GetBuffer multiple times, but only once for each type of DXVA buffer. If you call it twice with the same buffer type, you will overwrite the data.

After calling Execute, call IMemInputPin::Receive to deliver the frame to the video renderer, just as with software decoding. The Receive method is asynchronous; after it returns, the decoder can continue decoding the next frame. The display driver prevents any decoding commands from overwriting the buffer while it is still in use. The decoder should not reuse a surface to decode another frame until the renderer has released the sample. When the renderer releases the sample, the allocator returns it to its pool of available samples. To get the next available sample, call CBaseOutputPin::GetDeliveryBuffer, which in turn calls IMemAllocator::GetBuffer. (A per-frame sketch follows this paragraph.)
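
Putting these steps together, a single-frame decode pass might look like the following sketch. The buffer contents and the DXVA2_DecodeExecuteParams fields are decoder specific and only outlined here; m_pDecoder is the hypothetical member that holds the IDirectXVideoDecoder pointer, and pSurface is the Direct3D surface retrieved from the output sample obtained through GetDeliveryBuffer.

// Sketch only: the frame data and buffer bookkeeping are decoder specific.
HRESULT hr = m_pDecoder->BeginFrame(pSurface, NULL);   // Target surface for this frame.

if (SUCCEEDED(hr))
{
    void *pBuffer = NULL;
    UINT cbBuffer = 0;

    // Get each DXVA buffer type the frame needs, fill it, and release it.
    hr = m_pDecoder->GetBuffer(DXVA2_PictureParametersBufferType, &pBuffer, &cbBuffer);
    if (SUCCEEDED(hr))
    {
        // ... copy the picture parameters into pBuffer (at most cbBuffer bytes) ...
        m_pDecoder->ReleaseBuffer(DXVA2_PictureParametersBufferType);
    }
    // (Repeat for the bitstream, inverse quantization matrix, and slice control buffers.)

    // Describe the buffers that were filled, then execute the decode operation.
    DXVA2_DecodeBufferDesc bufferDesc[4];
    ZeroMemory(bufferDesc, sizeof(bufferDesc));
    // ... set bufferDesc[i].CompressedBufferType and DataSize for each buffer used ...

    DXVA2_DecodeExecuteParams execParams = {0};
    execParams.NumCompBuffers = 0;                 // Set to the number of buffers used.
    execParams.pCompressedBuffers = bufferDesc;

    hr = m_pDecoder->Execute(&execParams);

    m_pDecoder->EndFrame(NULL);  // IDirectXVideoDecoder::EndFrame closes the frame.
}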
