Initial State of Tail Biting Convolution Code with Feedback

2 minute read



State-Space Representation for Convolutional Code

For a convolutional code that takes in $k$ input bits and generates $n$ output bits in each stage, denote the input in stage $t$ by \(\mathbf{u}_{t}=(u_t^{(1)} \ldots u_t^{(k)})^T\). Let the number of memory elements be $m$ and denote the state in stage $t$ by \(\mathbf{x}_ {t}=(x_ {t}^{(1)} \ldots x_ {t}^{(m)})^T\). Based on the state-space representation [1,2] of a convolutional code, with all arithmetic over GF(2), $\mathbf{x} _{t+1}$ can be written as:

$\begin{align} \mathbf{x} _{t+1} = A \mathbf{x}_t +B \mathbf{u}_t, \end{align}$

where $A$ is an $m\times m$ matrix and $B$ is an $m\times k$ matrix. Let the input sequence length be $L$. By applying (1) recursively, \(\mathbf{x}_{L}\) can be expressed as a function of the initial state \(\mathbf{x}_{0}\) and the input sequence \(\mathbf{u}_{t}\), $t=0,\ldots,L-1$:

$\begin{align} \mathbf{x} _{L} = A^L \mathbf{x} _0 + \sum _{\tau = 0}^{L-1}A^{(L-1)-\tau}B \mathbf {u} _{\tau}. \end{align}$

  1. The first term of (2) is the final state when the initial state is \(\mathbf{x}_{0}\) and all $L$ inputs are zero, i.e. \(\mathbf{u}_{t}=\mathbf{0}\) for \(t=0,\ldots,L-1\). This term is called the zero-input solution, denoted by \(\mathbf{x}_L^{[zi]}\).
  2. The second term of (2) is the final state when the initial state is \(\mathbf{0}\) and the input sequence is \(\mathbf{u}_{t}\), $t=0,\ldots,L-1$. This term is called the zero-state solution, denoted by \(\mathbf{x}_L^{[zs]}\).
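The decomposition above can be checked numerically. The sketch below uses a made-up $(A, B)$ pair (illustrative only, not from any standard code) and verifies that the final state of the full recursion equals the GF(2) sum of the zero-input and zero-state solutions:

```python
import numpy as np

# Hypothetical state-update matrices for m = 3 memory elements and
# k = 1 input bit (illustrative, not from a specific code).
A = np.array([[1, 0, 1],
              [1, 0, 0],
              [0, 1, 0]])
B = np.array([[1], [0], [0]])

def run_states(x0, u_seq):
    """Iterate x_{t+1} = A x_t + B u_t over GF(2) and return x_L."""
    x = x0.copy()
    for u in u_seq:
        x = (A @ x + B @ u) % 2
    return x

rng = np.random.default_rng(0)
L = 8
u_seq = rng.integers(0, 2, size=(L, 1))
x0 = rng.integers(0, 2, size=3)

x_full = run_states(x0, u_seq)                    # full recursion
x_zi = run_states(x0, np.zeros_like(u_seq))       # zero-input solution
x_zs = run_states(np.zeros(3, dtype=int), u_seq)  # zero-state solution
assert np.array_equal(x_full, (x_zi + x_zs) % 2)  # superposition holds
```

The superposition holds exactly because every operation in the recursion is linear over GF(2).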

The tail-biting constraint requires \(\mathbf{x}_L=\mathbf{x}_0\). Substituting this into (2), and noting that addition and subtraction coincide over GF(2), the following equation holds:

$\begin{align} (A^L+I_m)\mathbf{x}_0=\mathbf{x}_L^{[zs]}. \end{align}$

Provided \((A^L+I_m)\) is invertible over GF(2), \(\mathbf{x}_0\) is given by:

$\begin{align} \mathbf{x}_0=(A^L+I_m)^{-1}\mathbf{x}_L^{[zs]}. \end{align}$
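Over GF(2), this system can be solved by Gaussian elimination on the augmented matrix. A minimal sketch, where the matrix $A$ and the zero-state solution are made-up examples:

```python
import numpy as np

def gf2_solve(M, b):
    """Solve M x = b over GF(2) by Gaussian elimination on the
    augmented matrix; raises ValueError when M is singular."""
    aug = np.concatenate([M % 2, (b % 2).reshape(-1, 1)], axis=1)
    n = M.shape[0]
    for col in range(n):
        hits = np.nonzero(aug[col:, col])[0]
        if hits.size == 0:
            raise ValueError("matrix is singular over GF(2)")
        piv = col + hits[0]
        aug[[col, piv]] = aug[[piv, col]]     # bring pivot into place
        for row in range(n):
            if row != col and aug[row, col]:  # clear the column
                aug[row] ^= aug[col]
    return aug[:, -1]

# Hypothetical 3-state update matrix and example zero-state solution.
A = np.array([[1, 0, 1],
              [1, 0, 0],
              [0, 1, 0]])
L = 8
M = (np.linalg.matrix_power(A, L) + np.eye(3, dtype=int)) % 2
x_zs = np.array([1, 0, 1])
x0 = gf2_solve(M, x_zs)
assert np.array_equal((M @ x0) % 2, x_zs)  # x0 satisfies (3)
```

Solving directly avoids forming the inverse explicitly, though either works for the small $m$ typical of convolutional codes.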

As a result, in order to generate the tail-biting path for \(\mathbf{u}_{t}\), $t=0,\ldots,L-1$, run the encoding process twice:

  1. In the first pass, set \(\mathbf{x}_0=\mathbf{0}\), run the encoding process, and record \(\mathbf{x}_L^{[zs]}\).
  2. In the second pass, set \(\mathbf{x}_0=(A^L+I_m)^{-1} \mathbf{x}_L^{[zs]}\), run the encoding process again, and take its output.
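The two-pass procedure can be sketched end to end. Here $A$, $B$, and the input sequence are all made up for illustration, and `gf2_inv` is a small Gauss-Jordan inverse over GF(2) that assumes \(A^L + I_m\) is invertible:

```python
import numpy as np

# Hypothetical state-update matrices (m = 3 memory cells, k = 1 input).
# The output matrices C, D are omitted: only the state recursion is
# needed to find the tail-biting initial state.
A = np.array([[1, 0, 1],
              [1, 0, 0],
              [0, 1, 0]])
B = np.array([[1], [0], [0]])
m, k = B.shape

def final_state(x0, u_seq):
    """Run the encoder state recursion x_{t+1} = A x_t + B u_t (GF(2))."""
    x = x0.copy()
    for u in u_seq:
        x = (A @ x + B @ u) % 2
    return x

def gf2_inv(M):
    """Invert M over GF(2) via Gauss-Jordan; assumes M is invertible."""
    n = M.shape[0]
    aug = np.concatenate([M % 2, np.eye(n, dtype=int)], axis=1)
    for col in range(n):
        piv = col + np.nonzero(aug[col:, col])[0][0]
        aug[[col, piv]] = aug[[piv, col]]
        for row in range(n):
            if row != col and aug[row, col]:
                aug[row] ^= aug[col]
    return aug[:, n:]

# Pass 1: start from the all-zero state to obtain the zero-state solution.
u_seq = np.array([[1], [0], [1], [1], [0], [1], [0], [0]])
L = len(u_seq)
x_zs = final_state(np.zeros(m, dtype=int), u_seq)

# Pass 2: start from x0 = (A^L + I_m)^{-1} x_zs and encode again.
M = (np.linalg.matrix_power(A, L) + np.eye(m, dtype=int)) % 2
x0 = (gf2_inv(M) @ x_zs) % 2
assert np.array_equal(final_state(x0, u_seq), x0)  # tail-biting holds
```

In a real encoder, the second pass would also emit the codeword bits; only the state recursion is shown here.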

Find \(A\)

The key step of the above algorithm is finding \(A\). Reference [2] gives methods to find \(A\); in this section, I use a more straightforward way.

Let us focus on a single encoding stage with $\mathbf{u}_0=\mathbf{0}$; then (1) reduces to:

$\begin{align} \mathbf{x} _{1} = A \mathbf{x}_0. \end{align}$

Hence, taking $m=3$ as an example, if the states produced from the initial states \((1,0,0)^T\), \((0,1,0)^T\), \((0,0,1)^T\) are \(\mathbf{x}^1\), \(\mathbf{x}^2\), \(\mathbf{x}^3\), then \(A=[\mathbf{x}^1\ \mathbf{x}^2\ \mathbf{x}^3]\). In general, the $i$-th column of $A$ is the state reached from the $i$-th unit-vector initial state.
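This probing idea is easy to automate against an existing encoder implementation. In the sketch below, `one_step` is a hypothetical callback wrapping one encoder stage with zero input; the known matrix `A_true` is made up so the recovery can be checked against it:

```python
import numpy as np

def find_A(step, m):
    """Recover A by running one zero-input step x_1 = A x_0 from each
    unit-vector initial state; the results are the columns of A."""
    cols = [step(np.eye(m, dtype=int)[:, i]) for i in range(m)]
    return np.stack(cols, axis=1)

# Round-trip check with a known (illustrative) A.
A_true = np.array([[1, 0, 1],
                   [1, 0, 0],
                   [0, 1, 0]])
one_step = lambda x0: (A_true @ x0) % 2  # stand-in for a real encoder stage
assert np.array_equal(find_A(one_step, 3), A_true)
```

The same trick works for any $m$: probe the encoder $m$ times, once per memory element.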

References

  1. C. Weiss, C. Bettstetter and S. Riedel, “Code construction and decoding of parallel concatenated tail-biting codes,” in IEEE Transactions on Information Theory, vol. 47, no. 1, pp. 366-386, Jan. 2001, doi: 10.1109/18.904537.
  2. Fragouli, Christine, and Richard D. Wesel. “Convolutional codes and matrix control theory.” Proceedings of the 7th International Conference on Advances in Communications and Control, Athens, Greece. 1999.