
Wednesday, April 24, 2013

Hardware Accelerated QtMultimedia Backend for Raspberry Pi and OpenGL Shaders on Video

EDIT: This article is completely outdated. Refer to newer posts for properly working builds, instructions on how to build, etc. In particular: "Using POT Builds" and "Build Procedure for PiOmxTextures".

In some previous posts I developed a custom QML component to render video in a QML scene, using hardware accelerated decoding and rendering without the frames ever passing through the ARM side. This resulted in good performance on the Raspberry Pi even with 1080p high profile h264 videos.

Many bugs still need to be fixed and the code should be refactored a little, but it shows the approach is possible and works well. So I decided to take the next step: modifying Qt to make it possible to use the "standard" QtMultimedia module to access the same decoding/rendering implementation. This would provide better integration with Qt and allow users to recompile their applications without changing anything in their implementation.

The QtMultimedia module uses GStreamer on Linux to provide multimedia capabilities; unfortunately, GStreamer is not hardware accelerated on the Pi unless you use something like gst-omx.

Thus, I started to look at the QtMultimedia module sources in Qt 5 and found out (as I was hoping) that the Qt guys have done, as usual, a very good job in designing the concept, providing the classic plugin structure for multimedia backends as well. Unfortunately, also as usual, not much documentation is provided on how to implement a new backend, but it is not that difficult anyway if you look at the other implementations.

Design

In the end, I came up with a structure like this: a new QtMultimedia backend providing minimal MediaPlayer and VideoOutput functionality, leveraging a "library version" of the PiOmxTextures sample code, which in turn uses a "bundled" version of omxplayer built around the OpenMAX texture render component as the video sink.
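For the curious, the entry point of such a backend is a QMediaServiceProviderPlugin subclass that hands QtMultimedia a media service for the keys it supports. This is only a minimal sketch of that skeleton, with a hypothetical OpenMAXILPlayerService class standing in for the service wrapping the PiOmxTextures library; refer to the repo for the actual implementation:

#include <QMediaServiceProviderPlugin>

class OpenMAXILServicePlugin : public QMediaServiceProviderPlugin
{
   Q_OBJECT
   Q_PLUGIN_METADATA(IID "org.qt-project.qt.mediaserviceproviderfactory/5.0"
                     FILE "openmaxil.json")

public:
   QMediaService *create(const QString &key)
   {
      // QtMultimedia asks each plugin for the services it provides;
      // answer only for the media player service.
      if (key == QLatin1String(Q_MEDIASERVICE_MEDIAPLAYER))
         return new OpenMAXILPlayerService; // hypothetical, wraps PiOmxTextures
      return 0;
   }

   void release(QMediaService *service)
   {
      delete service;
   }
};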

As said, the Qt guys have done a good job! I had to change almost nothing in the Qt implementation; everything lives inside the plugin (apart from a minimal modification to the texture mapping, which for some reason was upside-down and mirrored).
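To give an idea of the kind of fix involved, flipping the V coordinate of the texture rectangle when building the scene graph quad looks roughly like this (just a sketch of the technique, not the actual patch):

#include <QSGGeometry>

// Sketch: map the decoded frame onto the quad with the V axis flipped.
void mapFrameFlipped(QSGGeometry *geometry, const QSizeF &size)
{
   QSGGeometry::updateTexturedRectGeometry(
            geometry,
            QRectF(QPointF(0, 0), size), // vertex rectangle
            QRectF(0, 1, 1, -1));        // texture rectangle, V flipped
}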

Results

The result is pretty good: I don't see many differences from the previous custom QML component (the decoding and rendering code is the same and the QML component is implemented using the same exact principle, so nothing really changed).
I'm only beginning to play with this and have just tried a couple of things. In the video you can see the "standard" qmlvideo and qmlvideofx examples provided with the Qt sources.

How to Build

Clone the repo somewhere, then use the prepare_openmaxil_backend.sh script in tools. It will compile PiOmxTextures as a shared lib and place everything you need in the openmaxil_backend directory. Copy that directory recursively into your Qt source tree at qt_source_tree/qtmultimedia/src/plugins, naming it simply openmaxil.

Some changes are needed in the Qt tree to make it compile the new backend automatically instead of the gstreamer backend, to fix the texture mapping, and to make the "standard" qmlvideo and qmlvideofx examples work. No real modification to the code is needed: it is sufficient to instantiate the QQuickView those examples use with a specific class definition, and to provide the plugin with the instance of the QQuickWindow containing the media player.
These changes can be applied to the qtmultimedia tree using the patch in the tools directory of the repo. Then build the qtmultimedia module with:

path_to_qmake/qmake "CONFIG+=raspberry"

You'll find all you need here: https://github.com/carlonluca/pi.

How to Use

After you have built the plugin, you can simply use the "standard" Qt API for MediaPlayer and VideoOutput. The only restriction is that the plugin needs access to a QQuickView to reach the render thread of the Qt Scene Graph. This might be an issue, but I've not found another solution to this yet.

All you have to do is provide your application's QQuickView through this exact class, which must be defined in your application:


#include <QQuickView>

// The plugin resolves this exact symbol at runtime to obtain the view.
class RPiQuickView
{
public:
   static QQuickView* getSingleInstance();
};

Q_DECL_EXPORT QQuickView* RPiQuickView::getSingleInstance() {
   static QQuickView instance;
   return &instance;
}


This is needed because the plugin will look for the RPiQuickView::getSingleInstance() symbol, which should be resolvable after the dynamic linker has linked the plugin to the executable. You'll also need to add -rdynamic to your application's linker flags, to ensure the linker places the symbol in the dynamic symbol table.

This is what I added to the qmlvideo and qmlvideofx examples to make them work. It is of course not elegant, but I couldn't find a better way in a reasonable time.
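To clarify the mechanism, the plugin-side lookup conceptually works like the following sketch (not the exact plugin code; the mangled name is the one g++ emits for the class declared above):

#include <dlfcn.h>
#include <QQuickView>

typedef QQuickView *(*GetViewFunc)();

QQuickView *lookupApplicationView()
{
   // Resolvable only if the executable exported the symbol (-rdynamic).
   void *sym = dlsym(RTLD_DEFAULT, "_ZN12RPiQuickView17getSingleInstanceEv");
   if (!sym)
      return 0;
   return reinterpret_cast<GetViewFunc>(sym)();
}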

Of course, you'll have to copy the resulting Qt libraries to your Pi, together with libPiOmxTextures.so (unless you built it statically) and the ffmpeg libraries (do not use the ffmpeg libs already on your Pi, as those likely won't work; use the ones compiled by the compile_ffmpeg.sh script in tools).

What Remains to Be Done

Most of the calls are not implemented, just the minimum needed to get video on the screen. The audio implementation is also still missing (though the OMX_MediaProcessor class should be ready to play audio as well), and only the QtQuick side is taken into consideration: I've never had the time to look at the widget implementation.

In case you find bugs, please report an issue on GitHub. If I find the time, I'll answer.

Edit 6.25.2013

Instantiation of the QQuickView using the RPiQuickView class is no longer needed as of commit 30e24106c5dd7a5998d49d7093baef49f332b1d2. I tested this revision with Qt 5.1.1 and everything seems to keep working correctly.

Sunday, March 3, 2013

Bring up Qt 5 on Raspberry Pi with Wayland

Ok, I've been waiting to do this for quite some time but never had the time to actually do it. I quickly tried twice, without success because of many issues. Now I've invested some hours and made it to the end of the journey :-)
I'll therefore try to describe here the steps to make Qt 5.0.1 (the current version in the Qt git) work on the new wheezy image with Wayland support.

Building the Qt Fundamental Modules

Of course the procedure is almost identical to the one I described here for Qt 5.0. I only did a couple of things differently to speed up the process; you can choose how to do it. I briefly describe some of the steps below.

  1. Download the latest available wheezy image from the Raspberry Pi website: http://www.raspberrypi.org/downloads.
  2. Uncompress the image and flash it to your SD card.
  3. Boot the new image on your board.
  4. Install some libs that we will need (I won't compile the xcb platform plugin here, so in this case I decided not to install those xcb libs):

    $ sudo apt-get install libdbus-1-dev libudev-dev libssl-dev
    $ sudo apt-get install libasound2-dev
    $ sudo apt-get install libgstreamer0.10-dev libgstreamer-plugins-base0.10-dev libgstreamer-plugins-bad0.10-dev
    $ sudo apt-get install libffi-dev libpixman-1-dev
    $ sudo apt-get install libsqlite3-dev libicu-dev libfontconfig1-dev


    libdbus-1-dev is used to get the QtDBus module compiled from qtbase, libudev-dev to get udev support, libssl-dev for OpenSSL, and libasound2-dev provides what Qt needs for ALSA support.
    The GStreamer libs are mainly used by the qtmultimedia and qtwebkit modules. If the environment is set up correctly for GStreamer support, the configure script will report success.
    libffi-dev and libpixman-1-dev are needed to compile the qtwayland module and its dependencies. libsqlite3-dev, libicu-dev and libfontconfig1-dev are needed only if you intend to use QtWebKit.
  5. Instead of loopback-mounting the image on your system to get a correct sysroot, I quickly scp'ed the needed binaries from my board to a newly created sysroot. In particular I copied:
    • /lib
    • /usr/lib
    • /usr/include
    • /opt
    I'll refer to the directory containing all of this as rasp_sysroot. Quick and dirty; you might also consider using rsync instead.
    As a final note, scp has the somewhat pleasant side effect of following the symlinks of the libraries.
  6. Clone the qt5 git repo and start building the qtbase module:

    $ git clone git://gitorious.org/qt/qt5.git
    $ cd qt5
    $ ./init-repository
    $ cd qtbase
    $ ./configure -prefix your_qt_prefix -release -device linux-rasp-pi-g++ \
    -make libs -device-option CROSS_COMPILE=your_toolchain_path/bin/arm-linux-gnueabihf- \
    -device-option DISTRO=wheezy -sysroot your_sysroot_path -opensource \
    -confirm-license -no-pch -make examples -nomake tests
    $ make -jn
    $ sudo make install

    In my case this was done automatically by the configure script, but you might need to set the pkg-config paths before running it:

    $ export PKG_CONFIG_PATH=your_sysroot_path/usr/lib/arm-linux-gnueabihf/pkgconfig
    $ export PKG_CONFIG_LIBDIR=your_sysroot_path/usr/lib/pkgconfig:your_sysroot_path/usr/lib/arm-linux-gnueabihf/pkgconfig
    $ export PKG_CONFIG_SYSROOT_DIR=your_sysroot_path


  7. During compilation I got an error indicating that it was impossible to find the header "vchost_config.h". I solved it by editing the file rasp_sysroot/opt/vc/include/interface/vmcs_host/vcgencmd.h:
    33c33

    < #include "vchost_config.h"
    ---
    > #include "linux/vchost_config.h"


    This is maybe not very elegant, but it is sufficient. You might instead add an include path in qmake.conf or similar, but it seemed fine that way :-)
  8. At this point qtbase should have been successfully compiled. Now you should compile at least the qtscript, qtjsbackend and qtdeclarative modules:

    $ cd ..
    $ cd qtscript
    $ your_qt_prefix/bin/qmake
    $ make -jn
    $ sudo make install


    Repeat the same for the other two modules.

Building the QtWayland Dependencies

The xkbcommon and wayland libraries must be cross-compiled before trying to build QtWayland. To do that, first set up the environment; I used this script:

export RPI_SYSROOT=your_sysroot_path
export TOOLCHAIN=your_toolchain_path
export QTDIR=your_qt_sources_dir/qtbase
export PATH=$QTDIR/bin:$TOOLCHAIN/bin:$PATH
export PREFIX=your_qt_prefix
export PKG_CONFIG_PATH="$RPI_SYSROOT/usr/lib/pkgconfig:$RPI_SYSROOT/$PREFIX/lib/pkgconfig:$RPI_SYSROOT/$PREFIX/share/pkgconfig"
export PKG_CONFIG_SYSROOT_DIR="$RPI_SYSROOT"
export PKG_CONFIG_ALLOW_SYSTEM_LIBS=1
export PKG_CONFIG_ALLOW_SYSTEM_CFLAGS=1
export CPP=$TOOLCHAIN/bin/arm-linux-gnueabihf-cpp
export CC=$TOOLCHAIN/bin/arm-linux-gnueabihf-gcc
export CXX=$TOOLCHAIN/bin/arm-linux-gnueabihf-g++
export CFLAGS="--sysroot=$RPI_SYSROOT"
export CXXFLAGS="--sysroot=$RPI_SYSROOT"
export CPPFLAGS="--sysroot=$RPI_SYSROOT"
export LD=$TOOLCHAIN/bin/arm-linux-gnueabihf-ld
export LDFLAGS="--sysroot=$RPI_SYSROOT"
export AS=$TOOLCHAIN/bin/arm-linux-gnueabihf-as
export STRIP=$TOOLCHAIN/bin/arm-linux-gnueabihf-strip
export AR=$TOOLCHAIN/bin/arm-linux-gnueabihf-ar


Source the script and clone the xkbcommon lib from git:

$ source env_setup.sh
$ git clone git://people.freedesktop.org/xorg/lib/libxkbcommon.git
$ cd libxkbcommon/
$ ./autogen.sh --prefix=some_prefix --host=arm-linux-gnueabihf
$ make && make install


Copy the resulting libs and headers to your_sysroot_path.
Before compiling the wayland library, the wayland scanner is needed to generate C code from the Wayland protocol descriptions. To build it, open a new shell with a standard (non-cross) compilation environment, compile wayland-scanner and place it in the PATH:

$ git clone git://anongit.freedesktop.org/wayland/wayland
$ cd wayland
$ ./autogen.sh --disable-documentation
$ make
$ cp src/wayland-scanner $QTDIR/bin


Now get back to the cross-compilation environment and compile the wayland library itself:

$ cd wayland_dir
$ git clean -dxf
$ ./autogen.sh --host=arm-linux-gnueabihf --prefix=$RPI_SYSROOT$PREFIX --disable-scanner --disable-documentation
$ make
$ sudo make install


Then, if necessary, copy the libraries into your sysroot.
Now let's build QtWayland:

$ export QT_WAYLAND_GL_CONFIG=brcm_egl
$ cd your_qt_sources_dir
$ git clone http://qt.gitorious.org/qt/qtwayland
$ cd qtwayland
$ your_qt_prefix/bin/qmake CONFIG+=wayland-compositor
$ make
$ sudo make install


At this point I have to say I had issues during the execution of the qmake binary. Unfortunately I couldn't track down all the reasons, but it seems libxkbcommon couldn't be found. According to the .pro file, the qtCompileTest function is used to check whether config.test/xkbcommon can be built. Apparently the inclusion of X11/keysym.h couldn't be satisfied, even though it shouldn't be needed... and although the test file could be compiled, the qtwayland.pro check was still failing, so I simply removed the checks for xkbcommon and the rest of the build procedure succeeded:

load(configure)
qtCompileTest(wayland)
#qtCompileTest(xkbcommon)
qtCompileTest(wayland_scanner)
qtCompileTest(wayland_egl)
qtCompileTest(egl)
qtCompileTest(brcm_egl)
qtCompileTest(glx)
qtCompileTest(xcomposite)

CONFIG += config_xkbcommon

load(qt_parts)

!config_wayland {
error(QtWayland requires Wayland 1.0.0 or higher)
}

#!config_xkbcommon {
#error(QtWayland requires xkbcommon 0.2.0 or higher)
#}

!config_wayland_scanner {
error(QtWayland requires wayland-scanner)
}

!config_wayland_egl {
message("no wayland-egl support detected, cross-toolkit compatibility disabled");
}

Running Applications Using the QtWayland Platform Plugin

Now copy the result of the build (which should be in your sysroot) to your Pi and try to run the Wayland example compositor from QtWayland:

cd your_qt_prefix/examples/qtwayland/qml-compositor
export XDG_RUNTIME_DIR=/tmp
./qml-compositor -platform eglfs


At this point the server should be running. Now open another shell and try to run any Qt application using the wayland-brcm platform plugin:

$ cd your_app_path
$ ./your_app_bin -platform wayland-brcm &


Now you should see the window on the screen.
It is possible, however, that some EGL/OpenGL error occurs, like: eglCreatePixmapSurface failed: 3003, global image id: 0 0. That is a bad_allocation error: consider increasing the memory reserved for the GPU by adding gpu_mem=n to the /boot/config.txt file, where n is the number of MBs to assign to the GPU. Read here for more information: http://elinux.org/RPiconfig.

Building QtWebKit

For more details refer to this. It seems the Qt guys have done a good job on QtWebKit: making it work simply requires building and running it. Compile the qtwebkit module as described above, then copy the libraries back to the device and load a WebView element.
The only thing that still seems to be missing is 16-bit color depth support: if you try to run it you might see a mess on the screen, because the QtWebProcess is writing 24-bit images in 16-bit mode. More details on this here.
Anyway, it now seems sufficient to set the framebuffer to 24 bits to make it work:

$ fbset -depth 24

No need to modify the eglfs plugin anymore. The EGL configuration seems to correctly reflect the framebuffer color depth.

Building QtMultimedia

QtMultimedia is the module responsible for handling multimedia content. On Linux it is based on GStreamer which, as already said, is available for the Raspberry Pi. However, GStreamer relies on plugins to decode/render multimedia content, and most of those are clearly not hardware accelerated, which makes it nearly useless for video playback on an embedded platform.

There is, however, a plugin that is supposed to use the RPi accelerated OpenMAX libraries: gst-omx. I have to say I've still never seen it work well, so I'm not sure whether it works on the Pi or not. It is an interesting subject, but I don't know if or when I'll get my hands on it.
I tried the QtMultimedia module a couple of times though, and I could play a couple of videos, but the result was clearly unusable. Something like this should play the video (no audio):

import QtQuick 2.0
import QtMultimedia 5.0

Rectangle {
   width:  1920
   height: 1080
   color:  "black"

   MediaPlayer {
      id: player
      source: "file://..."
      autoPlay: true
   }

   VideoOutput {
      id: videoOutput
      source: player
      anchors.fill: parent
   }
}


Bye!

Saturday, February 23, 2013

QML Components for Video Decoding and Rendering POC Code Available

As requested, I shared the sources of the demo videos I posted recently. I tested these components with a few videos and they seem to work "reasonably well" even for 1080p high profile h264 with 5.1 audio. The current implementation uses a player class which decodes data and a surface class that renders it. Rendering the video on multiple surfaces seems to work.

Beware that the code is not complete: it is only a proof of concept of how to implement this. If you need to use it in production code, you'll have to work on it quite a bit. There are many TODOs left and no testing has been run on the classes. The cleanup code must be completely rewritten and only the pause/resume/stop commands are implemented at the moment. Also consider going through the relevant code looking for leaks; I didn't pay much attention to that while implementing, because my intention was to refactor it, sorry.

Only 1080p resolution is currently supported; I never even tried anything different, so you'll probably have to look around and see where I hardcoded those values (I was in a hurry :-)).
There are many unused classes in the code; I left them there only because they might be useful for new implementations.

I started to work on other things recently, so I really have little time to spend on this. But I can see that many people are interested, so I decided that incomplete code is better than no code. Also, I have to say I have no practical need for these components; I only worked on this as a challenge in my spare time. Now that there is no challenge anymore, I have to say I lost some interest and I'm looking for a new one :-D

This is the github URL of the repo (PiOmxTextures is the project directory):

https://github.com/carlonluca/pi

The current implementation of the OMX_MediaProcessor class uses the components implemented in the omxplayer code, with modifications to some of them. The modified sources are placed in the omxplayer_lib directory of the project tree: I chose this structure to make it relatively simple to merge changes from the omxplayer sources.

How to build

To build the project, you'll need a build of the Qt libraries, version 5.0.0 at least. Instructions on how to build them can be found around the web; I also wrote a quick article on that if you need it (this is the updated version for 5.0.1).

Once you have your Qt build and Qt Creator set up, you can open the .pro file. You should also have the Raspberry Pi sysroot somewhere on your system, known to Qt Creator. The project has the same dependencies as omxplayer, so you need those as well. I tested this only against a specific build of ffmpeg, which is the one omxplayer was using when I last merged: to compile it you can use the compile_ffmpeg.sh script, which is included in the source tree. Just running it and passing the number of compilation threads to use should be sufficient:

git clone https://github.com/carlonluca/pi
cd pi/PiOmxTextures/tools
./compile_ffmpeg.sh n
cd ../../
mkdir PiOmxTextures-build-rasp-release
cd PiOmxTextures-build-rasp-release
path_to_qmake/qmake "DEFINES+=CONFIG_APP" ../PiOmxTextures
make -jn


Note that the sample application renders a file whose path is hardcoded in the qml file. Change that if you want to test the sample code.

As said, this is just a proof of concept and I do not have much time to maintain it. If you want to help fix the code and merge new changes, issue a merge request!
I hope this code can be of help to other open source projects! Feel free to leave a comment if you have any! Bye!

Sunday, January 20, 2013

Custom QML Component for Hardware Accelerated Video Leveraging omxplayer

While working on completing the custom QML component for rendering video on the Raspberry Pi, I asked myself: does it really make sense to always re-invent the wheel? And also, shouldn't the key point of open source be the sharing of code, ideas and work?
I answered myself no, and then yes :-)
So, why go on reimplementing demuxing, audio decoding, audio rendering, subtitles and so on? The Raspberry Pi is already a target of the https://github.com/huceke/omxplayer implementation. So, I completely got rid of my implementation and started leveraging an adaptation of the omxplayer code, turning it into a library to be used by a similar QML component. In a few hours, this was the result:

Unfortunately, I still have no code I can share, and I'm still experiencing some "interruptions" during rendering which do not seem to appear in omxplayer, but if anyone wants to take this road, the result seems really encouraging, as you can see! You can start from the code I posted here if you want.

Wednesday, December 26, 2012

Custom QML Component to Integrate Hardware Acceleration in Qt Scene Graph

This is a custom QQuickItem subclass rendering hardware-decoded h264 1080p video on the Raspberry Pi, built on the egl_render component used in some previous posts. The QML component plays nicely with the rest of the scene rendered by the Qt Quick renderer in the scene graph.
The code will be available when I find the time to clean it up, but it is a direct consequence of the previous posts, which include the code implementing decoding and rendering.
In the QML test code, I created some simple standard QML animations and also placed an Image element with 0.5 opacity overlapping the video element.
NOTE: New information is available here.

Sunday, December 9, 2012

Decoding and Rendering to Texture H264 with OpenMAX on Raspberry Pi

After accomplishing the target of decoding and rendering compressed image formats onto OpenGL textures directly using OpenMAX, I looked into how to use the same concept to render an h264 stream directly into a texture, using the same OpenMAX component, on my Raspberry Pi.
As a starting point I took the same sample code I used in this post, and the hello_video sample code by Broadcom in VideoCore.

This is the code I wrote to make it work (PiOmxTextures_2.0.tar.bz2, version 2.0, https://github.com/carlonluca/pi). Notice that this is only a collection of notes which compiles and seems to work fine on the wheezy Raspberry Pi image; it is not a fully reusable component. I'm still working on that.
The code is pretty messy and much of it is not fully implemented. Error management is almost nonexistent, but it can still be useful to figure out how to make things work. The rest is up to you ;-)
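Just to sketch the core idea: the pipeline, in hello_video style, tunnels the decoder into egl_render, roughly as follows (a sketch based on the Broadcom ilclient API; error handling, state transitions and buffer feeding are omitted):

#include <cstring>
#include "bcm_host.h"
#include "ilclient.h"

// Sketch: build the decode-to-texture pipeline by tunnelling the
// video_decode component into egl_render, which fills an EGLImage
// bound to the destination GL texture.
void buildDecodeToTexturePipeline(ILCLIENT_T *client, void *eglImage)
{
   COMPONENT_T *video_decode = 0, *egl_render = 0;
   TUNNEL_T tunnel[2];
   memset(tunnel, 0, sizeof(tunnel));

   ilclient_create_component(client, &video_decode, "video_decode",
         (ILCLIENT_CREATE_FLAGS_T)(ILCLIENT_DISABLE_ALL_PORTS | ILCLIENT_ENABLE_INPUT_BUFFERS));
   ilclient_create_component(client, &egl_render, "egl_render",
         (ILCLIENT_CREATE_FLAGS_T)(ILCLIENT_DISABLE_ALL_PORTS | ILCLIENT_ENABLE_OUTPUT_BUFFERS));

   // Decoder output port is 131, egl_render input port is 220.
   set_tunnel(tunnel, video_decode, 131, egl_render, 220);

   // Once the decoder signals its port settings change, activate the
   // tunnel and hand egl_render the EGLImage as its output buffer
   // (egl_render output port is 221).
   OMX_BUFFERHEADERTYPE *eglBuffer = 0;
   ilclient_setup_tunnel(tunnel, 0, 0);
   OMX_UseEGLImage(ILC_GET_HANDLE(egl_render), &eglBuffer, 221, NULL, eglImage);
}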
This is a video illustrating the sample code running:

To compile the code refer to this article. You will need the Qt 5.0 libraries (version 4.0 might be sufficient) running with the eglfs plugin. Hope this helps!

Friday, December 7, 2012

Decoding and Rendering Compressed Images with OpenMAX on Raspberry Pi

After building Qt 5.0 on the Raspberry Pi, I focused my attention on hardware acceleration using OpenMAX, which is quite an interesting subject to study. The most interesting element I found among those available for VideoCore is the egl_render component: it should be able to render the output of the decoder component directly into an EGL image, which is bound to a texture.
What interests me most is rendering video into a texture, but I thought I could start by just decoding an image and placing it into an OpenGL texture for rendering. This can be done by using the image_decode component together with the egl_render component. Refer to the documentation in the git repo for some more information.
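The key idea, which I borrowed from the sources listed below, is to wrap a GL texture in an EGLImage that egl_render can fill. A minimal sketch, assuming an EGL display and context are already current (on the Pi, eglCreateImageKHR is exported directly by the EGL library):

#define EGL_EGLEXT_PROTOTYPES
#include <GLES2/gl2.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>

// Sketch: create an empty RGBA texture and bind it to an EGLImage, which
// is then handed to egl_render as the buffer to decode into.
EGLImageKHR createTextureImage(EGLDisplay display, EGLContext context,
                               int width, int height, GLuint *texture)
{
   glGenTextures(1, texture);
   glBindTexture(GL_TEXTURE_2D, *texture);
   glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                GL_RGBA, GL_UNSIGNED_BYTE, NULL);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
   return eglCreateImageKHR(display, context, EGL_GL_TEXTURE_2D_KHR,
                            (EGLClientBuffer)*texture, NULL);
}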
Unfortunately not much documentation is available for this task, but I found something very interesting that helped me out much:
  1. This source code by Matt Ownby: I reused much of his excellent work.
  2. This project by Qt
  3. The textures example from Qt sources
The code I wrote is provided in a package below. I ran some tests comparing the original way of loading textures:

QGLWidget::bindTexture(QPixmap(fileAbsolutePath), GL_TEXTURE_2D, GL_RGBA);

with the method using the egl_render OpenMAX component. These are the results I got by loading six 1920x1080 jpegs:
    Average time out of 5 runs without OMX: 6750ms;
    Average time out of 5 runs with OMX: 918ms.
Here is the package containing the sources (version 1.0): PiOmxTextures.tar.bz2. The code links to Qt 5.0 and needs the 6 images to load to be specified on the command line. You need to place the images in the same directory (no spaces in the path) and name them like:

prefix{0, 1, 2, 3, 4, 5}.jpg

To compile you'll need Qt Creator or at least qmake:

cd PiOmxTextures
your_qmake
make


This is a video showing the performance of loading six 1080p jpegs; the first run uses software decoding, the second and third runs are hardware accelerated.
Sorry, the quality is really bad, but the performance can be appreciated anyway. It also seems that for some reason the software implementation failed to load one image: you can see a black texture on one side of the cube. The same does not happen with the hardware implementation; I didn't investigate the reason.
This is not ready-to-use code, just some notes that might be useful. I've almost never used OpenMAX or OpenGL, so if you happen to have any suggestions or observations, feel free to comment! ;-)

Tuesday, November 27, 2012

Making QtWebKit with WebKit2 in Qt 5.0 work on Raspberry Pi

After compiling Qt 5.0 for the Raspberry Pi, I wanted to try all the modules. I compiled and tested the simplest modules first:
  1. qtbase
  2. qtdeclarative
  3. qtjsbackend
  4. qtquick1
  5. qtscript
  6. qtxmlpatterns
Everything worked quite smoothly, without any change, except for the patch to the v8 engine in the qtjsbackend module for ARM hardfp (read here).

I then turned my attention to the QtWebKit module: I know that many changes have been applied to this module in Qt 5.0, so it is very interesting to try. I started to compile it the usual way:

cd your_qt_sources/qtwebkit
your_qt_prefix/bin/qmake
make -jn
make install

Unfortunately, at the moment of writing, it didn't work for me: the module compiled and linked successfully, but crashed almost as soon as the page load procedure started. I therefore had to search around and ask the Qt guys about this issue, and I was frequently directed here. It turned out this is exactly the issue I encountered, so I applied the latest patch (consider reading the entire report and checking whether a more stable fix is available):

cd your_qt_sources/qtwebkit
patch -p1 < ~/webkit_patch

where webkit_patch is the linked diff. Now you can recompile the module including the differences and try again. Consider that the patch seems to be close to landing, so it might no longer be necessary by the time you read this. As a first test, this is the code I tried:

#include <QApplication>
#include <QWebView>

int main(int argc, char* argv[])
{
    QApplication app(argc, argv);
    QWebView webView;
    webView.setUrl(QUrl("http://www.repubblica.it"));
    webView.show();
    return app.exec();
}


At this point, it should work. As you can see though, performance is unacceptable. This is not the way an embedded browser is supposed to work; these are more like classes to be used for a desktop browser.
The next step was testing WebKit2, with tiling and all that nice stuff. So I tried to create a simple application loading a WebView QML element, like this:

main.cpp:
#include <QGuiApplication>
#include <QQuickView>
#include <QUrl>

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);
    QQuickView viewer;
    viewer.setSource(QUrl("/home/pi/main.qml"));
    viewer.show();
    return app.exec();
}


main.qml:
import QtQuick 2.0
import QtWebKit 3.0

WebView {
    url: "http://www.cnn.com"
    x: 100
    y: 100  
    width:  1500
    height: 900
}


When I tried this the first time, it closed with a laconic error from QtWebProcess:

Failed to start " QtWebProcess 14"

After looking into the sources, I found this is simply a failure to find the QtWebProcess binary. It is needed because WebKit2 uses a separate process to render the content. Placing it in /usr/bin is sufficient:

cd /usr/bin
sudo ln -s your_qt_prefix/bin/QtWebProcess QtWebProcess

Or better, use a relative path. Now what you might get is... nothing at all. At least this is what I got: after some seconds a crash occurred and often nothing was drawn.
It took me some time to figure out what was happening: it seems that QtWebKit is trying to draw with 32-bit color depth, while the rest of Qt is rendering with 16-bit color depth (especially if you're using the eglfs platform plugin) on an RGB565 EGL configuration.
Looking at the Qt WebKit code, it seems the entire code assumes 32-bit color depth, and this is hardcoded instead of negotiated with the main process. As a quick fix, I made the entire EGL configuration 32-bit (read this).
By setting this, it is possible to make QtWebKit work in 1080p with 32-bit color depth. If I find the time, I'll make the modifications needed to support the same configuration with the correct color depth.

Consider that, at the moment of writing, the configuration of the wheezy image does not assign sufficient memory to the GPU. You'll therefore see pages where many tiles are black: that is the result of insufficient GPU memory when trying to allocate the textures. Just assign some more memory to the GPU by editing the /boot/config.txt file of your Pi.

Sample Video



NOTE: For updated information read here.

Monday, November 26, 2012

32 bit color depth Qt on Raspberry Pi

EDIT: I see this post is still consulted very often. The solution below was the only one I found years ago; now it seems that setting the env variable QT_QPA_EGLFS_FORCE888 to 1 works perfectly.

Some days ago I was asked to test the performance of Qt on the Raspberry Pi with a 32-bit color depth EGL configuration, using the eglfs platform plugin. I tried to set the framebuffer to 32-bit color depth using the usual command:

fbset -depth 32

Unfortunately nothing changed. I also tried some other changes that were suggested to me, with no luck.
This seems due to some strange parameter passed to the q_configFromGLFormat function in the qeglconvenience.cpp source file. What I did to make Qt render 32 bits per pixel was simply rewrite the function to always return a 32-bit EGL configuration. This is not a proper fix of the issue nor a good modification; it is just a hack to make it render true color. I hope I'll find the time to analyze the situation more accurately.

This is the simple modification I made:

// Inside q_configFromGLFormat in qeglconvenience.cpp: ignore the requested
// format and return the first 32 bit (8888) configuration available.
EGLint configCount = 0;
assert(eglGetConfigs(display, 0, 0, &configCount) == EGL_TRUE);
qDebug("%d EGL configurations were found.", configCount);
EGLConfig* configs = new EGLConfig[configCount];
assert(eglGetConfigs(display, configs, configCount, &configCount) == EGL_TRUE);

// Choose the first 32 bit configuration.
EGLConfig config = 0;
for (int i = 0; i < configCount; i++) {
   EGLint redSize, blueSize, greenSize, alphaSize;
   assert(eglGetConfigAttrib(display, *(configs + i), EGL_RED_SIZE, &redSize) == EGL_TRUE);
   assert(eglGetConfigAttrib(display, *(configs + i), EGL_GREEN_SIZE, &greenSize) == EGL_TRUE);
   assert(eglGetConfigAttrib(display, *(configs + i), EGL_BLUE_SIZE, &blueSize) == EGL_TRUE);
   assert(eglGetConfigAttrib(display, *(configs + i), EGL_ALPHA_SIZE, &alphaSize) == EGL_TRUE);
   qDebug("Config %d: RGBA(%d, %d, %d, %d).", i, redSize, greenSize, blueSize, alphaSize);
   if (redSize == 8 && blueSize == 8 && greenSize == 8 && alphaSize == 8) {
      config = *(configs + i);
      break;
   }
}
delete[] configs;
return config;

You might also want to choose according to other values of the EGL configurations, but this is sufficient to achieve the result for the moment.
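As a side note, an alternative to iterating over all configurations is to ask EGL directly for an 8888 configuration through eglChooseConfig; this is just a sketch of the same idea:

// Sketch: request an 8888 configuration explicitly.
static const EGLint attribs[] = {
   EGL_RED_SIZE,        8,
   EGL_GREEN_SIZE,      8,
   EGL_BLUE_SIZE,       8,
   EGL_ALPHA_SIZE,      8,
   EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
   EGL_NONE
};

EGLConfig config = 0;
EGLint matching = 0;
if (eglChooseConfig(display, attribs, &config, 1, &matching) != EGL_TRUE || matching < 1)
   qWarning("No 32 bit EGL configuration available.");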

After this modification, you'll have to rebuild the static library it is included in, and then the eglfs plugin. So:

cd your_qt_sources/qtbase/src/platformsupport
make

This should rebuild libQt5PlatformSupport.a. Then rebuild the entire eglfs plugin:

cd your_qt_sources/qtbase/src/plugins/platforms/eglfs
make clean
make -jn

Then place the newly built libqeglfs.so on your Pi in the platform plugins directory.

NOTE: for updated information read here.