I previously made a post about a simple dithering algorithm I wrote based on pixel brightness density over an image. I recently thought about going back to that and applying it to video, and here is my result!
The video itself says it all really! My data flow is:
- Read a raw AVI file into Matlab to grab individual frames
- Scale to new pixel size (in this case, 84×48)
- Apply dithering to each frame
- Pack frame pixels into the byte format required by the PCD8544 LCD
- Store all packed pixels as raw data in a *.dat file (* being the video name!) on my SD card
- Read back packed pixels into the STM32F0 frame buffer (84×48 pixels, packed into 504 bytes)
- Print frame buffer to PCD8544
- Delay until X ms have passed, where X is the reciprocal of the frame rate minus the time taken by the screen write
- Repeat from the read-back step until the end of the file is reached
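The packing step above boils down to mapping each 8-pixel vertical strip of a frame into one byte, since the PCD8544's display RAM is organised as 6 row-banks of 84 columns. Here is a minimal C sketch of that conversion; the function name and the one-byte-per-pixel input format are my own assumptions, not code from the project:

```c
#include <stdint.h>

#define LCD_W 84
#define LCD_H 48
#define LCD_BANKS (LCD_H / 8)   /* 6 banks of 8 vertical pixels each */

/* Pack an 84x48 1-bit frame (one byte per pixel: 0 = clear, nonzero = set)
 * into the 504-byte bank layout the PCD8544 expects: byte (bank*84 + x)
 * holds pixels (x, bank*8 .. bank*8+7), with bit 0 at the top. */
void pack_frame(const uint8_t dithered[LCD_H][LCD_W],
                uint8_t packed[LCD_BANKS * LCD_W])
{
    for (int bank = 0; bank < LCD_BANKS; bank++) {
        for (int x = 0; x < LCD_W; x++) {
            uint8_t b = 0;
            for (int bit = 0; bit < 8; bit++) {
                if (dithered[bank * 8 + bit][x])
                    b |= (uint8_t)(1u << bit);
            }
            packed[bank * LCD_W + x] = b;
        }
    }
}
```

Doing this on the PC side means the STM32F0 can stream the bytes straight from the SD card into the display with no per-pixel work at playback time.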
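The delay step is a small calculation: each frame should occupy 1/frame-rate seconds in total, so the extra delay is that period minus however long the screen write took. A sketch in C (`frame_delay_ms` is a hypothetical helper of my own, assuming a millisecond tick is available):

```c
#include <stdint.h>

/* Given the frame rate and how long the screen write took, return how many
 * extra milliseconds to delay so each frame occupies 1000/fps ms in total.
 * If the write already overran the frame period, don't delay at all. */
uint32_t frame_delay_ms(uint32_t fps, uint32_t write_time_ms)
{
    uint32_t period_ms = 1000u / fps;   /* reciprocal of the frame rate */
    return (write_time_ms >= period_ms) ? 0 : period_ms - write_time_ms;
}
```

For example, at 25 fps each frame gets 40 ms, so a 10 ms screen write leaves 30 ms of delay.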
I’m pleasantly surprised at how well it actually worked! Applying the dithering takes a really long time since I haven’t written it particularly efficiently (not efficiently at all, to be quite honest…), but the most surprising part is that it works at all.
I need to learn a fair bit more about the AVI file type so I can hopefully write a much more efficient version in C/C++ in the near future, but for now, here is a proof of concept! It would also be better if I wrote the data to the raw files a little more efficiently, or at least made my own file header to describe the data in the file.
Hopefully, once I get the Ultra watch up and running, I can view some videos off my SD card. I’d also like to find some method of playing video and audio back synchronously, though the Ultra watch 1 doesn’t have the DAC wired.