Published on 2022-12-31 in
Windows
So exciting that even 13 years after my great teacher, Mani Monajjemi, introduced me to Qt, there's still a lot to learn.
I was trying to use WinRT with Qt today, and after sticking to MinGW for so long, I'm switching to MSVC. Here are the four reasons:
- MinGW is open source, but deep down, if you are working with Win32, Microsoft's own compiler always offers better API compatibility
- WinRT is available only on MSVC
- If you ever run into a DLL that simply doesn't work with your project, it's because the MinGW and MSVC ABIs are not the same, and that DLL was probably compiled with MSVC, not MinGW. Same OS and still a different ABI; sounds too Windowsy to me
- Because you are on Windows, show some support for the closed-source community!
Published on 2022-07-31 in
Windows
• CoInitialize: |
Initializes the COM library for use by the calling thread, sets the thread’s concurrency model, and creates a new apartment |
• CoInitializeEx: |
A more advanced version of CoInitialize that lets you specify the thread’s concurrency model |
• CoUninitialize: |
Should be called in the destructor; every successful CoInitialize/CoInitializeEx must be balanced by one CoUninitialize on the same thread |
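As a minimal sketch of the init/uninit pairing (here via Python's ctypes calling ole32 directly; the COINIT constants come from objbase.h, and the script only actually touches COM when it runs on Windows):

```python
import sys
import ctypes

# Concurrency models from objbase.h
COINIT_MULTITHREADED = 0x0
COINIT_APARTMENTTHREADED = 0x2

def com_scope():
    """Pair CoInitializeEx with CoUninitialize, RAII-style."""
    ole32 = ctypes.windll.ole32
    hr = ole32.CoInitializeEx(None, COINIT_APARTMENTTHREADED)
    if hr < 0:  # FAILED(hr)
        raise OSError(f"CoInitializeEx failed: 0x{hr & 0xFFFFFFFF:08X}")
    try:
        pass  # ... use COM objects here ...
    finally:
        ole32.CoUninitialize()  # balance the successful CoInitializeEx

if sys.platform == "win32":
    com_scope()
```

In C++ the same pairing is usually done with a small RAII guard object, which is what "should be called in the destructor" refers to.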
Published on 2022-07-08 in
Linux
Here I list cool bash tricks I've learned:
– Bash Heredoc
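A heredoc feeds a multi-line literal to a command's stdin. A quick way to play with one (driven from Python here so the snippet is self-contained; any reasonably recent bash will do):

```python
import subprocess

# A bash heredoc: everything between <<EOF and the closing EOF goes to
# cat's stdin. <<-EOF would also strip leading tabs; <<'EOF' (quoted)
# disables variable expansion inside the body.
script = """
cat <<EOF
Hello $USER
line two
EOF
"""

out = subprocess.run(["bash", "-c", script],
                     capture_output=True, text=True, check=True)
print(out.stdout)
```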
Published on 2022-05-06 in
Linux
OK, the title is a bit long, but why did Google create such a nice debug interface and then make it so difficult to access?
1. Open Chrome with remote debugging enabled
chromium --remote-debugging-port=9222 https://github.com/
2. Install websocat to open a WebSocket connection to Chrome
sudo pacman -S websocat
3. Find the magic Chrome WebSocket URL. To do that, visit the following URL
http://127.0.0.1:9222/json/list
4. Connect to the WebSocket
websocat ws://127.0.0.1:9222/devtools/browser/<GUID>
5. Execute a magic command. Here, just scrolling the page
{"id": 1, "method": "Runtime.evaluate", "params": {"expression": "document.documentElement.scrollTop = 600"}}
A few notes
- The WebSocket URL Chrome prints directly to stdout doesn't address any target page. Stick to
http://127.0.0.1:9222/json/list
or see the CDP tutorial for further information.
- For automated command execution in the debug session, you can use the following script
chrome_loop.sh
inotifywait -q -m -e close_write cmd |
while read -r filename event; do
cat cmd | websocat -n1 ws://127.0.0.1:9222/devtools/page/<GUID>
done
cmd
{"id": 1, "method": "Runtime.evaluate" , "params": {"expression": "alert('hi')"}}
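Since the /json/list endpoint from step 3 returns plain JSON over HTTP, picking the right WebSocket URL can be scripted too. A sketch (the helper names are mine, and fetch_targets obviously needs a running Chrome):

```python
import json
from urllib.request import urlopen

def pick_ws_url(targets, url_prefix):
    """Return the webSocketDebuggerUrl of the first page whose URL matches."""
    for t in targets:
        if t.get("type") == "page" and t.get("url", "").startswith(url_prefix):
            return t["webSocketDebuggerUrl"]
    return None

def fetch_targets(host="127.0.0.1", port=9222):
    # Same data you see by visiting http://127.0.0.1:9222/json/list
    with urlopen(f"http://{host}:{port}/json/list") as resp:
        return json.load(resp)

# Example of the shape /json/list returns (made-up page ID):
sample = [{"type": "page", "url": "https://github.com/",
           "webSocketDebuggerUrl": "ws://127.0.0.1:9222/devtools/page/ABCD"}]
print(pick_ws_url(sample, "https://github.com"))
```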
Published on 2022-04-30 in
Speech Recognition
To calculate word-level confidence scores, Kaldi uses a method called MBR (Minimum Bayes Risk) decoding. MBR decoding is a decoding process that minimizes the expected word-level error rate (instead of minimizing the cost of the whole utterance). This may not give the most accurate transcript, but it can be used to calculate a confidence score up to some level. Just don't expect too much, as the estimate is not very accurate.
Here are some key concepts:
1. Levenshtein Distance: Levenshtein distance, or edit distance, computes the difference between two sentences: how many words differ between the two. Let's say X and Y are the two word sequences shown below. The Levenshtein distance would be 3, where Ɛ represents the empty word

To calculate the Levenshtein distance you can use the following recursive algorithm, where A and B are word sequences of length N+1

As in all recursive algorithms, to reduce duplicate computation Kaldi uses the memoization technique and stores the three cases above in a1, a2 and a3 respectively
2. Forward-Backward Algorithm: Let's say you want to calculate the probability of seeing a waveform (or MFCC features) given a path in a lattice (or an HMM FST). The forward-backward algorithm is nothing more than an optimized way to compute this probability.

3. Gamma Calculation: TBA
4. MBR Decoding: TBA
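The recursion from item 1 (with its three memoized cases a1, a2, a3) can be sketched over word sequences. A minimal version, assuming unit costs for insertion, deletion, and substitution:

```python
from functools import lru_cache

def levenshtein(X, Y):
    """Edit distance between two word sequences X and Y."""
    @lru_cache(maxsize=None)  # memoization, as in Kaldi's implementation
    def d(i, j):
        if i == 0:
            return j          # insert the first j words of Y
        if j == 0:
            return i          # delete the first i words of X
        a1 = d(i - 1, j) + 1                           # deletion
        a2 = d(i, j - 1) + 1                           # insertion
        a3 = d(i - 1, j - 1) + (X[i - 1] != Y[j - 1])  # substitution or match
        return min(a1, a2, a3)
    return d(len(X), len(Y))

print(levenshtein("the cat sat".split(), "a cat sat down".split()))  # 2
```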
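The forward-backward computation from item 2 can be sketched on a toy HMM. This is a plain-probability version with made-up transition/emission tables, not Kaldi's lattice code; the key property is that the total observation probability comes out the same from either direction:

```python
def forward_backward(obs, init, trans, emit):
    """alpha[t][s] = P(o_1..o_t, state_t=s); beta[t][s] = P(o_{t+1}..o_T | state_t=s)."""
    S, T = len(init), len(obs)
    alpha = [[0.0] * S for _ in range(T)]
    beta = [[0.0] * S for _ in range(T)]
    for s in range(S):                       # forward initialization
        alpha[0][s] = init[s] * emit[s][obs[0]]
    for t in range(1, T):                    # forward recursion
        for s in range(S):
            alpha[t][s] = emit[s][obs[t]] * sum(
                alpha[t - 1][r] * trans[r][s] for r in range(S))
    for s in range(S):                       # backward initialization
        beta[T - 1][s] = 1.0
    for t in range(T - 2, -1, -1):           # backward recursion
        for s in range(S):
            beta[t][s] = sum(trans[s][r] * emit[r][obs[t + 1]] * beta[t + 1][r]
                             for r in range(S))
    p_fwd = sum(alpha[T - 1][s] for s in range(S))
    p_bwd = sum(init[s] * emit[s][obs[0]] * beta[0][s] for s in range(S))
    return alpha, beta, p_fwd, p_bwd

# toy 2-state model over 2 observation symbols
init = [0.6, 0.4]
trans = [[0.7, 0.3], [0.4, 0.6]]
emit = [[0.9, 0.1], [0.2, 0.8]]
alpha, beta, p_fwd, p_bwd = forward_backward([0, 1, 0], init, trans, emit)
```

Real decoders do this in log space for numerical stability; plain probabilities keep the sketch readable.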
Published on 2022-04-13 in
Speech Recognition
The delta-delta feature was proposed by S. Furui in 1986 and by Hermann Ney in 1990. It simply adds the first and second derivatives of the cepstrum to the feature vector. By doing that, they argue, it can capture spectral dynamics and improve overall accuracy.
The only problem is that, in a discrete signal space, taking the derivative of a signal amplifies the noise level, so instead of simple first- and second-order derivatives HTK proposed a differentiation filter. This filter is basically a low-pass filter convolved with the discrete derivative, to smooth out the result and remove unwanted noise. In Fig. 1 you can see the result of the plain second derivative vs the proposed differentiation filter.

Fig. 1. Plain derivative vs differentiation filter. Courtesy of Matlab™
The HTK filter for a delta-delta feature (order = 2, window = 2) is a 9-element FIR filter with the following coefficients (Θ is the window size, which is 2 in HTK)

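The 9-tap filter can be reproduced by convolving the first-order delta kernel with itself. A sketch, assuming the standard HTK delta regression d_t = Σ_{θ=1..Θ} θ·(c_{t+θ} − c_{t−θ}) / (2·Σ_{θ=1..Θ} θ²) with Θ = 2:

```python
def delta_kernel(theta=2):
    """FIR taps of the HTK delta regression over c_{t-Θ}..c_{t+Θ}."""
    norm = 2 * sum(t * t for t in range(1, theta + 1))   # 2*(1+4) = 10 for Θ=2
    return [t / norm for t in range(-theta, theta + 1)]  # [-0.2,-0.1,0,0.1,0.2]

def convolve(a, b):
    """Full linear convolution of two tap lists."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

delta = delta_kernel(2)
delta_delta = convolve(delta, delta)   # delta of a delta = 9-element FIR filter
print([round(c, 2) for c in delta_delta])
# [0.04, 0.04, 0.01, -0.04, -0.1, -0.04, 0.01, 0.04, 0.04]
```

Note the taps are symmetric and sum to zero, as you'd expect from a smoothed second derivative.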
• Reverberation: |
Is the effect of sound bouncing off the walls of a room and arriving back at the listener. The reverberation time is roughly between 1 and 2 seconds in an ordinary room. You can use the Sabine equation for a more accurate calculation. |
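The Sabine estimate falls out of the room volume and total absorption; a sketch using the metric form RT60 ≈ 0.161·V/A, where A is the total absorption in sabins (Σ surface area × absorption coefficient; the room dimensions and coefficients below are made up for illustration):

```python
def rt60_sabine(volume_m3, surfaces):
    """Sabine reverberation time. surfaces: (area_m2, absorption_coeff) pairs."""
    absorption = sum(area * alpha for area, alpha in surfaces)  # total sabins
    return 0.161 * volume_m3 / absorption

# a hypothetical 5m x 4m x 3m room
room = rt60_sabine(
    volume_m3=60,
    surfaces=[(2 * (5 * 3 + 4 * 3), 0.03),  # plaster walls
              (20, 0.04),                   # ceiling
              (20, 0.30)],                  # carpeted floor
)
print(f"RT60 ~ {room:.2f} s")
```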
- I’m thinking about using a compressor filter on speech instead of CMVN to normalize in real time.
- The above formula for c_t makes no sense. I will update this as soon as I can post code here
IEEE ICASSP ’86 – Isolated Word Recognition Based on Emphasized Spectral Dynamics
IEEE ICASSP ’90 – Experiments on mixture-density phoneme-modelling for 1000-word DARPA task
Desh Raj Blog – Award-winning classic papers in ML and NLP
Published on 2022-03-29 in
Speech Recognition
• Lattices: |
A graph containing states (nodes) and arcs (edges), in which each state represents one 10 ms frame |
• Arcs: |
Go from one state to another state. Each state's arcs can be accessed with an arc iterator, and arcs only retain their next state. Each arc has a weight, an input label, and an output label. |
• States: |
Simple decimal numbers starting from lat.Start() and going up to lat.NumStates(). Most of the time the start state is 0 |
• Topological Sort: |
An FST is topologically sorted if it can be laid out on a horizontal axis with no arc pointing from right to left |
• Note 1: |
You can get the maximum state with lat.NumStates() |
• Note 2: |
You can prune lattices by creating dead-end paths. A dead-end path is a path that does not end up at the final state. After that, fst::Connect will trim the FST and get rid of these dead paths |

Fig. 1. Topologically Sorted Graph
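The dead-end pruning from Note 2 is easy to emulate on a toy graph: fst::Connect keeps only the states that are both reachable from the start and able to reach a final state. A sketch with plain dicts (not OpenFst's actual API):

```python
def connect(arcs, start, finals):
    """Keep arcs between states reachable from start AND co-reachable to a final."""
    def reach(adj, roots):
        seen, stack = set(roots), list(roots)
        while stack:
            s = stack.pop()
            for n in adj.get(s, []):
                if n not in seen:
                    seen.add(n)
                    stack.append(n)
        return seen

    fwd, bwd = {}, {}
    for src, dst in arcs:
        fwd.setdefault(src, []).append(dst)   # forward adjacency
        bwd.setdefault(dst, []).append(src)   # reversed adjacency
    alive = reach(fwd, {start}) & reach(bwd, set(finals))
    return [(s, d) for s, d in arcs if s in alive and d in alive]

# state 3 is a dead end: reachable from 0 but it never reaches final state 2
arcs = [(0, 1), (1, 2), (0, 3)]
print(connect(arcs, start=0, finals=[2]))  # [(0, 1), (1, 2)]
```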
Published on 2022-02-18 in
Speech Recognition
• Token: |
The same as states. They have costs |
• FrameToks: |
A linked list that contains all tokens in a single frame |
• Adaptive Beam: |
Used in pruning before lattice creation and throughout decoding |
• NEmitting Tokens: |
Non-emitting tokens, or NEmitting tokens, are tokens generated from emitting tokens within the same frame; they have input label = 0 and acoustic_cost = 0 |
• Emitting Tokens: |
Emitting tokens are tokens that cross from one frame to the next |
Lattice Decoder at a Glance

Fig. 1. After First Emitting Nodes Process

Fig. 2. After Second Emitting Nodes Process
Published on 2022-01-06 in
Speech Recognition
Published on 2021-11-29 in
Speech Recognition
• Costs: |
Are negative log probabilities, so a higher cost means a lower probability. |
• Frame: |
Each 10 ms of audio is turned into a fixed-size vector, called a frame, using MFCC. |
• Beam: |
The cutoff is Best Cost + Beam (Beam is around 10 to 16) |
• Cutoff: |
The maximum cost; any token whose cost is higher than this value will not be processed and is removed. |
• Epsilon: |
The zero label in an FST is called <eps> |
• Lattices: |
Are the same as FSTs, except each token is kept in a frame-based array called frame_toks. This way, the distance in time between tokens is preserved too. |
• Rescoring: |
A language-model scoring pass applied after the final state to improve the final result, using a stronger LM than the decoding n-gram. |
• HCLG(FST): |
The main FST used in decoding. The ilabels in this FST are TransitionIDs. |
• Model(MDL): |
A model used to convert sound into acoustic costs and TransitionIDs. |
• TransitionIDs: |
A number that encodes a state and its corresponding PDF id. |
• Emitting States: |
States that have PDFs associated with them and emit a phoneme; in other words, states whose ilabel is not zero |
• Bakis Model: |
An HMM in which state transitions proceed from left to right. In a Bakis HMM, no transition goes from a higher-numbered state to a lower-numbered state. |
• Max Active: |
Used to calculate a cutoff that determines the maximum number of tokens processed inside the emitting step. |
• Graph Cost: |
The sum of the LM cost, the (weighted) transition probabilities, and any pronunciation cost. |
• Acoustic Cost: |
The cost obtained from the decodable object. |
• Acoustic Scale: |
A floating-point number multiplied into all log-likelihoods (inside the decodable object). |
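The Beam/Cutoff/Max Active interplay can be sketched in a few lines: since costs are negative log probabilities, the survivors are the tokens within beam of the best one, further capped by max_active. The names here are mine, not Kaldi's:

```python
import math

def prune_tokens(costs, beam=12.0, max_active=3):
    """costs: token costs (negative log probabilities; lower is better)."""
    best = min(costs)
    cutoff = best + beam                     # anything costlier is dropped
    survivors = sorted(c for c in costs if c < cutoff)
    if len(survivors) > max_active:          # max_active tightens the cutoff
        cutoff = survivors[max_active]
        survivors = [c for c in survivors if c < cutoff]
    return survivors

# cost = -log(probability): p = 0.5 -> ~0.69, p = 1e-6 -> ~13.8
costs = [-math.log(p) for p in (0.5, 0.2, 0.1, 1e-6)]
print(prune_tokens(costs))  # the 1e-6 token falls outside the beam
```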
Fig. 1. Demonstration of Finite State Automata vs Lattices, Courtesy of Peter F. Brown
- Stanford University – Speech and Language Processing Book
- IEEE ICASSP – Partial traceback and dynamic programming