I answered myself no, and then yes :-)
So, why keep reimplementing demuxing, audio decoding, audio rendering, subtitles and so on? The Raspberry Pi is already a target of the https://github.com/huceke/omxplayer implementation. So I got rid of my implementation entirely and started adapting the omxplayer code into a library to be used by a similar QML component. In a few hours, this was the result:
Unfortunately, I still have no code I can share, and I'm still experiencing some "interruptions" during rendering which do not seem to appear in omxplayer. But if anyone wants to take this road, the result seems really encouraging, as you can see! You can start from the code I posted here if you want.
Agree with your new approach. This can take advantage of the proven OMXplayer code.
Can't wait to see your code!
Hi Luca!
This seems to be exactly what I need. But instead of using omxplayer (standalone) I would like to use gstreamer for the underlying transport layer. With omxplayer I've noticed that it's not that stable when it comes to multicast streams, and it seems that the main development effort around omxplayer has moved to xbmc, because most changes in the git repository are now several months old.
Anyway, you already did a great job and I would like to volunteer, if you like. I'm working on an OpenELEC fork (github.com/cybin/OpenELEC.tv) dedicated to Qt development, and video playback is a must-have. Please contact me on irc.freenode.net or drop me an email (spam at arachnodroid dot de).
Thanks a lot,
cybin
Hi Luca!
Thanks for all your work. I've tested your examples using EGLImage and all of this is working well. Now I want to move on to playing full-fledged video on an EGLImage (texture). When I delved into omxplayer I saw that there is a lot of work already done, and I don't fancy doing it all again either. So my idea is to run omxplayer in a separate process from my OpenGL app, control it through a pipe or shared memory, and give it the parameters needed to render to a texture used by my OpenGL app.
I've read that you partially succeeded in this. I'm at the beginning of this process and I would be happy if it were possible to share your changes to the omxplayer code.
Thanks,
Marty
No, that is not what I've done. What you describe is a solution which does not fully integrate the video into the OpenGL application. I read about a similar solution on the RPi forum; you might want to have a look there. I've modified the omxplayer code to render the video onto a texture which can then be processed by the GPU. I tested it with a couple of movies (mov and mkv) and it seems to be fine. The code might be available in the near future.
That's great! I am really excited about it and I look forward to seeing your implementation. If you need any help or tests I would be pleased to participate.
Just two notes about your previous examples (PiOmxTextures):
The memory allocated for the texture (GLubyte *pixel) with the "new" keyword and passed to glTexImage2D is never freed.
As a performance improvement, allocating that memory, filling it with gray, transferring it to the texture and freeing it can be replaced with a single glTexImage2D(....., NULL) call: instead of a real pointer, pass NULL. This lets the graphics card allocate the memory itself.
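Something along these lines (the parameters are just indicative, not the exact ones used in PiOmxTextures):
// Let the driver allocate the texture storage itself: no CPU-side buffer is needed.
glBindTexture(GL_TEXTURE_2D, textureId);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);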
I also have one question: can the function doLoadTextureFromImage be split into two parts, one (called just at init) responsible for getting the handles of the components, getting the port indexes, setting up the input port and moving the components to the IDLE state, and a second one which just converts the referenced images and then moves the components back to the IDLE state?
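Roughly, I imagine a split like this (names are purely illustrative, not taken from your code):
// Done once at init: get the component handles, query the port indexes,
// configure the input port and move the components to the IDLE state.
bool initImageDecoderPipeline();
// Called per image: decode it into a texture, then move the components
// back to the IDLE state so the pipeline can be reused.
OMX_TextureData* decodeImageToTexture(const QImage& image);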
Thanks,
Marty
Hi Luca,
I have implemented the same setup as you blogged about earlier, rendering from the video_decode to the egl_render component, and using that in a custom QQuickItem.
My problem, however, is that at 1080p resolution (using the h264 bunny demo video) I do not achieve the full 25fps needed for smooth playback; it hovers around 18-20fps. Your blog post/video, however, seems to claim 1080p works smoothly for you?
After the setup there is basically no code running except a loop between the FillBufferDone callback and an immediate call to OMX_FillThisBuffer again (there is no data in the buffer; it just indicates that frames are being rendered). This suggests that either egl_render cannot handle 25fps at 1080p, or that I am using suboptimal settings.
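In essence the loop is just this (a simplified sketch, not my actual code; error handling omitted):
// Called by egl_render once a frame has been rendered into the EGLImage.
OMX_ERRORTYPE FillBufferDone(OMX_HANDLETYPE component, OMX_PTR appData,
                             OMX_BUFFERHEADERTYPE* buffer)
{
    // Nothing to copy: the frame already lives in the texture backing the EGLImage.
    // Immediately hand the buffer back so the next frame can be rendered.
    return OMX_FillThisBuffer(component, buffer);
}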
Are there any special tricks you've used? Is there a more efficient color mode between the components? A way to optimize the egl_render output? Do you use more than one output buffer (EGLImage)? A different Qt framebuffer setup? Different OpenGL settings? (i.e. anything non-default)
My pipeline is video_decode -> video_scheduler -> resize -> egl_render with a clock attached to video_scheduler, similar to the hello_video pipeline. If I use the resize component to scale down the output video (and thus need a smaller EGL texture), the framerate goes up, which suggests that the problem is not in the supply of the video data but only in the egl_render code.
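For completeness, the tunnels are set up roughly like this (raw OMX IL calls; the port numbers are recalled from the RPi components used in hello_video, so double-check them):
// clock -> video_scheduler (clock input), then the video path down to egl_render.
OMX_SetupTunnel(clock,     80,  scheduler, 12);
OMX_SetupTunnel(decoder,   131, scheduler, 10);
OMX_SetupTunnel(scheduler, 11,  resizer,   60);
OMX_SetupTunnel(resizer,   61,  eglRender, 220);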
Thank you for any hints,
Dennis
I'm sorry, I never profiled the frame rate. What you see in the video is what I got. I had trouble with the omxplayer integration with 1080p high-profile h264 mkv, but that was something different.
I look forward to seeing your latest code. I am mainly interested in playing non-HD videos. If you cannot or don't want to release your code, could you please tell me the average CPU usage of the RPi when playing 480p video (e.g. bigbuckbunny) with your setup? I appreciate your help.
Andrei
Hello Luca
The component is awesome, but I have a little problem. After some random amount of time receiving a multicast stream, the audio keeps playing properly but the video stops. In my opinion something is blocking the main thread. I'm trying to work out this case, but maybe you have a simple idea? I would be thankful for any help.
Sorry, I've never tested with multicast channels. Does the same happen with omxplayer? I'd check whether the update procedure of the video surface component is invoked with the correct frequency. If it is, but the video is still stuck while the audio keeps running, I'd check whether ffmpeg is receiving video packets or not. If it is... well, you'll have to go down the pipeline with patience.
What I figured out:
After the video locks up, OMX_VideoSurfaceElement::updatePaintNode is never called again.
If I add a timerEvent to OMX_VideoSurfaceElement and call startTimer, then after the video locks up the timerEvent is not called again either.
I'm now trying to move the MediaProcessor to a separate thread, because when a video is being opened using OMX everything stops (I have InfraRed, serial port and several GPIO ports connected and I cannot change any value while the video is being opened); maybe that will help with this issue.
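Roughly I'm attempting something like this (just a sketch; the actual method to start playback in the component may be named differently):
// Move the media processor to its own worker thread and start playback there,
// so the blocking open/decode work no longer stalls the GUI thread.
QThread* mediaThread = new QThread;
mediaProcessor->moveToThread(mediaThread);
QObject::connect(mediaThread, &QThread::started,
                 mediaProcessor, &OMX_MediaProcessor::play); // placeholder slot name
mediaThread->start();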
Of course the object is still alive, because adding a simple qDebug() to the update() slot:
void OMX_VideoSurfaceElement::update()
{
qDebug() << "TEST";
QQuickItem::update();
}
prints TEST, but it feels like QQuickItem::update() is waiting for a better time to do its job :) after the video locks up.
The MediaProcessor already lives in a different thread. Can you reproduce this with a small sample application?
vlc runs like this:
/usr/local/bin/vlc \
-I "dummy" dvb-t:// \
--dvb-adapter=0 \
--dvb-frequency=578000000 \
--ttl 20 \
--ts-es-id-pid \
--programs=3,4,5,6,23,24,25,26 \
--sout '#duplicate{dst=std{access=udp,mux=ts,dst=224.1.1.1},select="es=102,es=103",dst=std{access=udp,mux=ts,dst=239.255.12.4},select="es=202,es=203",dst=std{access=udp,mux=ts,dst=239.255.12.5},select="es=302,es=303",dst=std{access=udp,mux=ts,dst=239.255.12.6},select="es=402,es=403",dst=std{access=udp,mux=ts,dst=239.255.12.23},select="es=502,es=503",dst=std{access=udp,mux=ts,dst=239.255.12.24},select="es=602,es=603",dst=std{access=udp,mux=ts,dst=239.255.12.25},select="es=702,es=703",dst=std{access=udp,mux=ts,dst=239.255.12.26},select="es=802,es=803"}'
It's vlc 2.0.5; the signal comes from our DVB-T via MPEG-TS. Polish DVB-T uses MPEG4 video with EAC3 or MP2 audio (I selected the MP2 format for audio). This runs on the broadcasting server.
On the Raspberry I have Qt 5.0.2, and I even stopped all my other modules.
main() is almost copied from your main.cpp:
int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    av_register_all();
    CLog::Init("./");
    qRegisterMetaType<GLuint>("GLuint");
    qRegisterMetaType<OMX_TextureData*>("OMX_TextureData*");
    qmlRegisterType<OMX_ImageElement>("com.luke.qml", 1, 0, "OMXImage");
    qmlRegisterType<OMX_VideoSurfaceElement>("com.luke.qml", 1, 0, "OMXVideoSurface");
    qmlRegisterType<OMX_CameraSurfaceElement>("com.luke.qml", 1, 0, "OMXCameraSurface");
    qmlRegisterType<OMX_MediaProcessorElement>("com.luke.qml", 1, 0, "OMXMediaProcessor");
    QtQuick2ApplicationViewer viewer;
    viewer.setMainQmlFile(QStringLiteral("qmlF/TV/main.qml"));
    viewer.showFullScreen();
    return a.exec();
}
and the main.qml is
import QtQuick 2.0
import com.luke.qml 1.0
Rectangle {
    OMXMediaProcessor {
        id: mediaProcessor
        source: "udp://224.1.1.1:1234"
        volume: 500
    }
    OMXVideoSurface {
        id: omxVideoSurface
        anchors {
            fill: parent
        }
        source: mediaProcessor
    }
}
OMX_VideoSurfaceElement is not refreshed (the main thread is also not running; you can check this by adding a Timer to the QML) until the OMX_MediaProcessor is fully initialized:
m_omx_reader.Open(filename.toStdString(), true)
blocks until the stream is detected and the metadata is decoded.
Once it moves on to the media decoding method, the OMXVideoSurface starts refreshing again. The random video stop used to happen without any modification of updatePaintNode.
After modifying the method to this:
QSGNode* OMX_VideoSurfaceElement::updatePaintNode(QSGNode* oldNode, UpdatePaintNodeData*)
{
    QSGGeometryNode* node = static_cast<QSGGeometryNode*>(oldNode);
    QSGGeometry* geometry = 0;
    if (!node) {
        // Create the node.
        node = new QSGGeometryNode;
        geometry = new QSGGeometry(QSGGeometry::defaultAttributes_TexturedPoint2D(), 4);
        geometry->setDrawingMode(GL_TRIANGLE_STRIP);
        node->setGeometry(geometry);
        node->setFlag(QSGNode::OwnsGeometry);

        // TODO: Who is freeing the material?
        QSGOpaqueTextureMaterial* material = new QSGOpaqueTextureMaterial;
        material->setTexture(m_sgtexture);
        node->setMaterial(material);
        node->setFlag(QSGNode::OwnsMaterial);

        QRectF bounds = boundingRect();
        QSGGeometry::TexturedPoint2D* vertices = geometry->vertexDataAsTexturedPoint2D();
        vertices[0].set(bounds.x(), bounds.y() + bounds.height(), 0.0f, 0.0f);
        vertices[1].set(bounds.x() + bounds.width(), bounds.y() + bounds.height(), 1.0f, 0.0f);
        vertices[2].set(bounds.x(), bounds.y(), 0.0f, 1.0f);
        vertices[3].set(bounds.x() + bounds.width(), bounds.y(), 1.0f, 1.0f);
    }

    if ((GLuint)m_sgtexture->textureId() != m_textureId) {
        QMutexLocker locker(&m_mutexTexture);
        m_sgtexture->setTexture(m_textureId, m_textureSize);
    }

    return node;
}
it ran for about 2 hours (give or take a few minutes)... I built the TV/VOD software, but this is the only (and main) issue that keeps the software from working :)
I passed the magic 2 hours... keep your fingers crossed. If the video is still OK at 17:00 I will post the solution here.
The video still works.
This is the change:
QSGNode* OMX_VideoSurfaceElement::updatePaintNode(QSGNode* oldNode, UpdatePaintNodeData*)
{
    QSGGeometryNode* node = static_cast<QSGGeometryNode*>(oldNode);
    QSGGeometry* geometry = 0;
    setFlag(ItemHasContents, true);
    if (!node) {
        // Create the node.
        node = new QSGGeometryNode;
        geometry = new QSGGeometry(QSGGeometry::defaultAttributes_TexturedPoint2D(), 4);
        geometry->setDrawingMode(GL_TRIANGLE_STRIP);
        node->setGeometry(geometry);
        node->setFlag(QSGNode::OwnsGeometry);

        // TODO: Who is freeing the material?
        QSGOpaqueTextureMaterial* material = new QSGOpaqueTextureMaterial;
        material->setTexture(m_sgtexture);
        node->setMaterial(material);
        node->setFlag(QSGNode::OwnsMaterial);

        QRectF bounds = boundingRect();
        QSGGeometry::TexturedPoint2D* vertices = geometry->vertexDataAsTexturedPoint2D();
        vertices[0].set(bounds.x(), bounds.y() + bounds.height(), 0.0f, 0.0f);
        vertices[1].set(bounds.x() + bounds.width(), bounds.y() + bounds.height(), 1.0f, 0.0f);
        vertices[2].set(bounds.x(), bounds.y(), 0.0f, 1.0f);
        vertices[3].set(bounds.x() + bounds.width(), bounds.y(), 1.0f, 1.0f);
    }

    if ((GLuint)m_sgtexture->textureId() != m_textureId) {
        QMutexLocker locker(&m_mutexTexture);
        m_sgtexture->setTexture(m_textureId, m_textureSize);
    }

    update();
    node->markDirty(QSGNode::DirtyGeometry | QSGNode::DirtyMaterial);
    return node;
}
To be honest I don't know which part really made the fix: I found the update() call inside updatePaintNode in some example I googled, and as for markDirty I found that QQuickWindow checks the dirty state, so I figured that since updatePaintNode sets the material and geometry I should mark the node dirty with these flags. Please examine the change with your own insight.
Anyway, you saved my life with this component :) BIG THANKS
I don't exactly understand why this makes any difference, but thanks for submitting it; I'll try to investigate.