Erdal Arikan introduced polar codes in 2009, providing the first deterministic construction of capacity-achieving codes for binary memoryless symmetric (BMS) channels. Arikan's research spanned more than 20 years, beginning with work on the computational cutoff rate of sequential decoding and concluding with the resolution of this fundamental problem in communication theory. The sender's information bits, after appropriate processing, are transmitted through the channel. The channel transformation creates virtual channels in two main steps: channel combining and channel splitting. The original channels are converted into virtual channels for the required processing, and it is these good polarization properties that enable polar codes to achieve the symmetric mutual information [1].
Polar codes require low complexity while offering provable reliability guarantees. Binary source and channel coding is the main setting in which their applicability has been established. Discrete memoryless processes over arbitrary alphabet sizes can be polarized by recursive transforms. When the alphabet size is prime, this can be accomplished with a linear transform similar to Arikan's. When the alphabet size is not prime, linear transforms lose the ability to polarize all discrete memoryless processes; nonlinear transforms, however, polarize all memoryless processes for all finite alphabet sizes. In the binary case, the complexity and error-probability behavior of the resulting codes follow from the polar transformation. Using this method, one can design encoding and decoding schemes that reliably transmit bits in few uses of the channel [2].
The advantage of the polarization effect is that it allows codes to be constructed that attain the symmetric channel capacity. The basic idea of polar coding is to create a coding system in which the user can access each coordinate channel individually and transmit data only over those whose error probability is near 0.
The coding architecture is based on mixing bits via a Fourier-like transform. The decoding algorithm computes the probability that an input bit equals 0, given the previous input bits and the channel outputs.
Reliable bits obtained through a polarizing transform can thus be decoded with small error probability, as long as the values of the unreliable bits are provided to the decoder in advance.
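The effect of a single polarization step can be sketched with the 2x2 kernel F = [1 0; 1 1]: two channel uses carry (u1 XOR u2, u2), so u1 sees a degraded channel while u2, once u1 is known, sees an upgraded one. A minimal illustration (Python is used here purely for illustration; the function name is our own):

```python
# One application of the Arikan kernel F = [[1, 0], [1, 1]] over GF(2):
# x1 = u1 XOR u2 (decoded first, treating u2 as noise -> worse channel),
# x2 = u2        (decoded with u1 known, combining both uses -> better channel).

def polar_transform_2(u1, u2):
    """Single 2x2 polarization step: (u1, u2) -> (u1 ^ u2, u2)."""
    return (u1 ^ u2, u2)

x1, x2 = polar_transform_2(1, 0)
assert (x1, x2) == (1, 0)
x1, x2 = polar_transform_2(1, 1)
assert (x1, x2) == (0, 1)
```

Repeating this step recursively on N = 2^n channel uses is what drives the virtual channels toward either perfect or useless, which is the polarization phenomenon described above.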
The codeword length N must be a power of two, i.e. N = 2^n, where n is a positive integer.
F = [1 0; 1 1] is the basic 2x2 kernel, and F^(x)n denotes its n-fold Kronecker power.
B_N is the N x N bit-reversal permutation matrix: it maps index i to the index whose n-bit binary representation is the bit-reversal of that of i.
The generator matrix of the polar code is defined as
G_N = B_N F^(x)n
The polar code is generated by using the equation given below:
x_1^N = u_1^N G_N → Equation 1
where x_1^N is the encoded bit sequence and u_1^N is the input bit sequence.
The bit indexes of u_1^N are divided into two subsets:
A → information bits
A^c → frozen bits
G_N(A) denotes the submatrix of G_N formed by the rows with indices in A, and similarly for G_N(A^c). The codeword can then be written as
x_1^N = u_A G_N(A) ⊕ u_A^c G_N(A^c) → Equation 2
where u_A are the information bits and u_A^c are the frozen bits.
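Equation 1 and the definitions above can be checked numerically. The sketch below (Python with NumPy, for illustration only; the helper names are our own) builds G_N = B_N F^(x)n and encodes a short input vector:

```python
import numpy as np

def bit_reversal_perm(n):
    """Indices 0..2^n-1 in bit-reversed order (the reordering B_N applies)."""
    return np.array([int(format(i, '0{}b'.format(n))[::-1], 2)
                     for i in range(2 ** n)])

def polar_generator(n):
    """G_N = B_N F^(x)n over GF(2), with kernel F = [1 0; 1 1] (Equation 1)."""
    F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        G = np.kron(G, F)              # n-fold Kronecker power of F
    return G[bit_reversal_perm(n)]     # B_N reorders the rows

u = np.array([0, 1, 0, 1], dtype=np.uint8)  # u_1^N for N = 4
x = (u @ polar_generator(2)) % 2            # x_1^N = u_1^N G_N
```

For n = 1 this recovers G_2 = F itself, and for n = 2 the row order [0, 2, 1, 3] is exactly the bit-reversal permutation of the four indices.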
Polar codes admit a very efficient successive-cancellation (SC) decoder, with decoding complexity O(N log N), and achieve capacity as the code length N becomes very large.
The polarizing construction is recursive: the code length doubles at each step (N = 2^n), starting from the base case N = 2 with generator matrix G_2 = F = [1 0; 1 1]. Applying the construction again yields the length N = 4 code with generator G_4.
The figure shows the polar code construction for length N = 2.
The figure shows the polar code construction for length N = 4.
There are two methods of encoding.
Polar codes can be encoded with either a systematic or a non-systematic scheme. To construct an (N, K) polar code, the N-K least reliable bit positions are frozen: they are set to zero, while the remaining K bits carry information. Encoding then proceeds over both the frozen and the information bits. In the figure, the frozen bits are indicated in gray and the K = 4 information bits carry the message [3].
The figure shows non-systematic encoding carried out by propagating u = … from left to right.
A polar code of length N is built from two polar codes of length N/2, so the code has a natural binary-tree representation. The tree representation of the (8, 4) polar code is shown.
The figure shows the decoder tree.
Polar codes can be encoded with either a systematic or a non-systematic scheme. Systematic encoding of polar codes improves the BER (bit error rate), and a low-complexity systematic encoding method exists [4]. This method is flexible for encoding polar codes. The low-complexity systematic encoding scheme comprises two non-systematic encoding operations, illustrated on the (8, 4) polar code with the N-bit vector u = (0, 0, 0, …, 0, …).
… → the K = 4 information bits
The frozen bits are reset to 0 by the encoder. The end result is an N-bit vector where
… represents the K information bits.
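Non-systematic encoding of an (8, 4) code can be sketched end to end. The frozen set below is only an example chosen for illustration (a real design freezes the least reliable positions for the target channel); Python is used for illustration only:

```python
import numpy as np

# Illustrative (8, 4) non-systematic encoding: frozen positions are set to 0,
# the K = 4 message bits fill the remaining positions, and an in-place
# butterfly network applies F^(x)3 to the whole vector.

N, n = 8, 3
frozen = np.array([1, 1, 1, 0, 1, 0, 0, 0], dtype=bool)  # True = frozen (example set)
msg = np.array([1, 0, 1, 1], dtype=np.uint8)             # K = 4 information bits

d = np.zeros(N, dtype=np.uint8)
d[~frozen] = msg                    # frozen bits stay 0
for i in range(1, n + 1):           # butterfly stages
    B = 2 ** (n - i + 1)            # current block size
    for base in range(0, N, B):
        for l in range(B // 2):
            d[base + l] ^= d[base + B // 2 + l]
# d now holds the 8-bit codeword
```

The three nested loops perform exactly the multiplication by F^(x)3 over GF(2), one Kronecker level per stage, without materializing the matrix.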
Polar codes are a provably capacity-achieving class of channel codes, introduced by Arikan for the setting of Shannon's noisy channel coding theorem. The standard successive cancellation decoding algorithm has O(N log N) decoding complexity, where N = 2^l is the code length. Approaching channel capacity requires large block lengths, so methods that reduce the effective block length reduce the decoding complexity and can also improve BER. The latest such work presents the successive cancellation decoding algorithm and compares variants in terms of decoding complexity and BER performance [5].
The polar codes are generated by
x_1^N = u_1^N G_N
where x_1^N is the encoded bit sequence and u_1^N is the input bit sequence. The bit indexes are divided into two subsets: one subset, A, contains the information bits, and the other, A^c, contains the frozen bits. The polar code is further expressed as
x_1^N = u_A G_N(A) ⊕ u_A^c G_N(A^c)
Consider the polar code with parameters (N, K, A, u_A^c), where N is the code length, K is the information length, u_A is the set of information bits, and u_A^c are the frozen bits.
The above equation denotes the channel transition probability, or likelihood. The SC decoder over the logarithm domain is preferred for its better robustness and lower complexity. The log-likelihood ratio is defined as
In each step of the SC decoder, only the most likely bit decision survives. Whenever a bit is incorrectly decoded, the subsequent decoding fails; this error propagation is addressed by an improved algorithm, the successive cancellation list (SCL) decoder. The SCL decoder can be viewed as a breadth-first search (BFS) version of the SC decoder. Similar to K-best detection for multiple-input multiple-output (MIMO) systems, the SCL decoder expands and selects paths level by level on the full binary tree. At each level, it expands the current paths and computes their path metrics, then keeps the L paths with the largest metrics instead of only the single best path. The list of L candidate paths is maintained until decoding completes, when the most likely surviving path is selected.
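The per-stage LLR updates that both SC and SCL decoders rely on can be sketched as follows. This uses the common min-sum approximation; the names f and g follow widespread usage in the polar-coding literature, not the package code in this document:

```python
import math

# The SC decoder propagates LLRs through the butterfly with two update rules:
# f combines two LLRs before the corresponding bit is decided (min-sum
# approximation of the exact tanh rule); g combines them after the
# partial-sum bit u from the earlier decision is known.

def f(a, b):
    """Check-node update: sign(a)*sign(b)*min(|a|, |b|)."""
    return math.copysign(1.0, a) * math.copysign(1.0, b) * min(abs(a), abs(b))

def g(a, b, u):
    """Variable-node update given the already-decided partial-sum bit u."""
    return b + (1 - 2 * u) * a

def decide(llr):
    """Hard decision: a positive LLR favors bit 0 (one common convention)."""
    return 0 if llr > 0 else 1
```

An SCL decoder applies the same f and g rules but keeps both hypotheses for each decided bit, accumulating a path metric per hypothesis and pruning down to L paths at every level.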
Folded SC decoder
SC is a suboptimal decoder with quasi-linear complexity N(1 + log N) in the code length N. A non-binary SC decoder with reduced complexity, known as the folded SC decoder, was proposed [6]. This method is based on a tree structure [7]. In addition, the folding operation admits log N alternative pairings of the bits, some of which give better error performance at the same complexity [8]. The tree structure is shown below:
Question 4
Systematic variants of polar encoding exist; they are usually formulated as solving a set of linear equations [9]. Implementations of all three algorithms are provided in our package. The interface uses the least-complexity algorithm by default when none is stated explicitly [10].
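As a sanity check on this linear-equations view, one well-known low-complexity systematic construction (the encode / re-freeze / encode-again trick, which relies on F^(x)n being its own inverse over GF(2)) can be illustrated for a small code. This is a generic sketch, not algorithm A, B, or C from the package, and the (4, 2) frozen set is illustrative only:

```python
import numpy as np

# Systematic polar encoding by two non-systematic transforms.
# F is involutory over GF(2) (F @ F = I mod 2), hence so is F^(x)n,
# which is what makes the double pass land the message in the clear.

F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
G = np.kron(F, F) % 2                  # F^(x)2, code length N = 4

frozen = np.array([True, True, False, False])  # A^c = {0, 1} (example design)
m = np.array([1, 0], dtype=np.uint8)           # K = 2 message bits

t = np.zeros(4, dtype=np.uint8)
t[~frozen] = m                         # place message at positions in A
w = (t @ G) % 2                        # first non-systematic encoding
w[frozen] = 0                          # re-freeze
x = (w @ G) % 2                        # second encoding: systematic codeword

assert (x[~frozen] == m).all()         # message appears in the clear
u = (x @ G) % 2
assert (u[frozen] == 0).all()          # still a valid polar codeword
```

The two matrix multiplications can each be replaced by the O(N log N) butterfly network, which is what makes this family of systematic encoders low-complexity.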
function d=pencode(u)
% PCparams structure is implicit parameter
% Encode 'K' message bits in 'u' and
% return 'N' encoded bits in 'd'
% Polar coding parameters (N,K,FZlookup,Ec,N0,LLR,BITS) are taken
% from "PCparams" structure. FZlookup is a vector, to look up each integer
% index 1:N and check if it is a message-bit location or frozen-bit location.
% FZlookup(i)==0 or 1 ==> bit-i is a frozen bit
% FZlookup(i)==-1 ==> bit-i is a message bit
global PCparams;

% Earlier logic (buggy -- dimensions could mismatch):
% d(PCparams.FZlookup == -1) = u;
% d(PCparams.FZlookup ~= -1) = PCparams(PCparams.FZlookup==-1);
% Replaced, better logic:
d = PCparams.FZlookup;            % loads all frozen bits, incl. -1
d(PCparams.FZlookup == -1) = u;   % -1's get replaced by message bits

n = PCparams.n;
for i=1:n                         % butterfly stages applying F^(x)n in place
    B = 2^(n-i+1);
    nB = 2^(i-1);
    for j=1:nB
        base = (j-1)*B;
        for l=1:B/2
            d(base+l) = mod( d(base+l)+d(base+B/2+l), 2 );
        end
    end
end
end
function [x,y]=systematic_pencode(u,algoname)
% Encode 'K' message bits in 'u' and
% return 'N' encoded bits in 'x'
% The order of outputs is chosen so that when assigned to ONE
% variable directly, we get the (default) output as the desired codeword x.
% 'algoname' is a character that must be 'A' or 'B' or 'C', as in the paper:
% "Efficient Algorithms for Systematic Polar Encoding", IEEE Communications Letters,
% Harish Vangala, Yi Hong, and Emanuele Viterbo.
% Its default value is 'A' (hence it is optional to supply it explicitly).
% Each of these letters corresponds to a specific algorithm.
% Polar coding parameters (N,K,FZlookup,Ec,N0,LLR,BITS) are taken
% from "PCparams" structure. FZlookup is a vector, to look up each integer
% index 1:N and check if it is a message-bit location or frozen-bit location.
% FZlookup(i)==0 or 1 ==> bit-i is a frozen bit
% FZlookup(i)==-1 ==> bit-i is a message bit
if(nargin==1)
    algoname='A';
end
global PCparams;
N=PCparams.N;
n=PCparams.n;

y = PCparams.FZlookup;            % loads all frozen bits, incl. -1
x = PCparams.FZlookup;
x(PCparams.FZlookup == -1) = u;
x(PCparams.FZlookup ~= -1) = -1;

if(algoname=='A')
    [y,x]=EncoderA(y,x);
elseif(algoname=='B')
    r=zeros(N,1);
    [~,y,x] = EncoderB(1,N,y,x,r);
elseif(algoname=='C')
    r=zeros(N,1);
    [y,x,~] = EncoderC(1,N,y,x,r);
else
    fprintf('\n Invalid Encoder Algorithm %c Supplied! (should be one of A B C)\n',algoname);
end
end
An implementation of the basic successive cancellation decoder is provided in our package, and we believe it to be an efficient one [11]. It is distributed freely for educational and research purposes. Since it is a MATLAB implementation, we are conscious that the only remaining way to improve speed is to re-implement the same architecture in a lower-level programming language [12].
function u=pdecode(y)
% PCparams structure is implicit parameter
% y : Received bits in an AWGN (of noise variance N0/2, available via "PCparams")
% u : Decoded message bits
% Polar coding parameters (N,K,FZlookup,Ec,N0,LLR,BITS) are taken
% from "PCparams" structure. FZlookup is a vector, to look up each integer
% FZlookup(i)==0 or 1 ==> bit-i is a frozen bit
% FZlookup(i)==-1 ==> bit-i is a message bit
% PCparams.Ec : The encoded bits power before entering AWGN
% PCparams.N0 : 2 times the noise variance
% PCparams.LLR : Log-Likelihood Ratios data structure for SC decoding,
%                a vector of 1 x 2N-1
% PCparams.BITS : Intermediate bit decisions for SC decoding
% EbN0 : If "SNR" is the signal-to-noise ratio of the AWGN,
%        Eb/N0 = (Ec/N0) * (N/K) = (SNR/2) * (N/K)
% TECHNIQUE: Compare the output with a sample output from another known-good decoder implementation:
% y = [-2.29054 -2.42021 0.78617 -1.48262 -1.78447 -1.34204 1.82231 2.01136 -0.50112 -1.70260 -2.20256 -1.23027 -1.83809 -0.65077 0.92667 1.07634]
global PCparams;
N=PCparams.N;

% Initializing the likelihoods (i.e. the right end of the butterfly str)
PCparams.LLR = 0; %PCparams.BITS=-1;
initialLRs = -(4*sqrt(PCparams.Ec)/PCparams.N0) * y;
PCparams.LLR(N:2*N-1) = initialLRs;
% Explanation:
% ------------
% y(i) = x(i) + n; x in {+sqrt(Ec),-sqrt(Ec)}; n ~ Gaussian(0,N0/2)
% LLR(i) = log( Pr{y(i) | x(i) = -sqrt(Ec)} / Pr{y(i) | x(i) = +sqrt(Ec)} )
% Pr(y|x) = (1/sqrt(2*pi*(N0/2))) * exp( -(y-x)^2 / (2*(N0/2)) )

d_hat = zeros(N,1);
finalLRs = zeros(N,1); %DEBUG

for j=1:N
    i = bitreversed(j-1,PCparams.n) +1 ; % "+1" is for base-1 indexing
    updateLLR(i);
    finalLRs(i) = PCparams.LLR(1); %DEBUG
    if PCparams.FZlookup(i) == -1
        if PCparams.LLR(1) > 0
            d_hat(i) = 0;
        else
            d_hat(i) = 1;
        end
    else
        d_hat(i) = PCparams.FZlookup(i);
    end
    updateBITS(d_hat(i),i);
end
u = d_hat(PCparams.FZlookup == -1);

% DEBUGGING (sample expected values for the test vector above):
% fprintf('\n\n N=%d, K=%d, initdB=%.2f',N,PCparams.K,PCparams.designSNRdB);
% fprintf('\n FrozenBitsLookups Actual and Expected respectively =\n [');
% fprintf('%d ',PCparams.FZlookup); fprintf('\b]');
% fprintf('\n [0 0 0 -1 0 -1 0 -1 0 -1 0 -1 0 -1 -1 -1]\n\n');
% fprintf('\n Received Vectors Actual & Expected respectively =\n [');
% fprintf('%.5f ',y'); fprintf('\b]');
% fprintf('\n [-2.29054 -2.42021 0.78617 -1.48262 -1.78447 -1.34204 1.82231 2.01136 -0.50112 -1.70260 -2.20256 -1.23027 -1.83809 -0.65077 0.92667 1.07634]\n\n');
% fprintf('\n Initial Likelihoods Vectors Actual & Expected respectively =\n [');
% fprintf('%.5f ',initialLRs'); fprintf('\b]');
% fprintf('\n [4.58109 4.84043 -1.57235 2.96523 3.56893 2.68408 -3.64461 -4.02271 1.00224 3.40520 4.40512 2.46054 3.67618 1.30155 -1.85335 -2.15268]\n\n');
% fprintf('\n FINAL o/p Likelihoods Vectors Actual & Expected respectively =\n [');
% fprintf('%.5f ',finalLRs'); fprintf('\b]');
% fprintf('\n [-0.09647 1.25093 1.65714 8.74567 0.39491 6.98428 5.69810 20.24560 -0.69722 -4.64566 4.73276 -19.65780 2.06594 -15.43850 13.90370 -44.99160]\n\n');
end
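The initial-LLR line in pdecode follows directly from the AWGN likelihoods in its comment block. A standalone check (Python, for illustration; the numerical values are taken from the sample vectors quoted in the debug comments above):

```python
import math

# For BPSK levels x in {+sqrt(Ec), -sqrt(Ec)} and noise variance N0/2:
# log( Pr{y|x=-sqrt(Ec)} / Pr{y|x=+sqrt(Ec)} )
#   = ( (y - sqrt(Ec))^2 - (y + sqrt(Ec))^2 ) / N0
#   = -4*sqrt(Ec)*y / N0,
# which is exactly the initialLRs expression in pdecode.

def channel_llr(y, Ec, N0):
    """Channel LLR of one received sample under the pdecode convention."""
    return -(4.0 * math.sqrt(Ec) / N0) * y

# With Ec = 1 and N0 = 2 this reproduces the sample values in the debug
# comments, e.g. y = -2.29054 gives an initial LLR of about 4.58108.
llr0 = channel_llr(-2.29054, 1.0, 2.0)
```

A positive LLR under this convention means x = -sqrt(Ec) is the more likely transmitted level, which the decoder maps to the bit decision 0.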
function u=systematic_pdecode(y)
% PCparams structure is implicit parameter
% y : Received bits in an AWGN (of noise variance N0/2, available via "PCparams")
% u : Decoded message bits
% Polar coding parameters (N,K,FZlookup,Ec,N0,LLR,BITS) are taken
% from "PCparams" structure. FZlookup is a vector, to look up each integer
% FZlookup(i)==0 or 1 ==> bit-i is a frozen bit
% FZlookup(i)==-1 ==> bit-i is a message bit
% PCparams.Ec : The encoded bits power before entering AWGN
% PCparams.N0 : 2 times the noise variance
% PCparams.LLR : Log-Likelihood Ratios data structure for SC decoding,
%                a vector of 1 x 2N-1
% PCparams.BITS : Intermediate bit decisions for SC decoding,
%                 a matrix of 2 x N-1
% EbN0 : If "SNR" is the signal-to-noise ratio of the AWGN,
%        Eb/N0 = (Ec/N0) * (N/K) = (SNR/2) * (N/K)
global PCparams;
N=PCparams.N;

% Initializing the likelihoods (i.e. the right end of the butterfly str)
PCparams.LLR = 0; %PCparams.BITS=-1;
initialLRs = -(4*sqrt(PCparams.Ec)/PCparams.N0) * y;
PCparams.LLR(N:2*N-1) = initialLRs;
% Explanation:
% y(i) = x(i) + n; x in {+sqrt(Ec),-sqrt(Ec)}; n ~ Gaussian(0,N0/2)
% LLR(i) = log( Pr{y(i) | x(i) = -sqrt(Ec)} / Pr{y(i) | x(i) = +sqrt(Ec)} )
% Pr(y|x) = (1/sqrt(2*pi*(N0/2))) * exp( -(y-x)^2 / (2*(N0/2)) )

d_hat = zeros(N,1);
finalLRs = zeros(N,1); %DEBUG

for j=1:N
    i = bitreversed(j-1,PCparams.n) +1 ; % "+1" is for base-1 indexing
    updateLLR(i);
    finalLRs(i) = PCparams.LLR(1); %DEBUG
    if PCparams.FZlookup(i) == -1
        if PCparams.LLR(1) > 0
            d_hat(i) = 0;
        else
            d_hat(i) = 1;
        end
    else
        d_hat(i) = PCparams.FZlookup(i);
    end
    updateBITS(d_hat(i),i);
end

% The message bits are available after ONE non-systematic-encoding
% operation of the decoded d_hat; available precisely at
% non-frozen locations.
% x_hat = pencode(d_hat(PCparams.FZlookup == -1));
x_hat = FN_transform(d_hat);
u = x_hat(PCparams.FZlookup == -1);
end
References
[1]T. Tanaka, “Properties of a certain stochastic dynamical system, channel polarization, and polar codes”, Journal of Physics: Conference Series, vol. 233, p. 012018, 2010.
[2] E. Arikan, "A performance comparison of polar codes and Reed-Muller codes", IEEE Communications Letters, vol. 12, no. 6, pp. 447-449, 2008.
[3]J. Lin, High performance decoder architectures for error correction codes. 2015.
[4]P. Giard, C. Thibeault and W. Gross, High-Speed Decoders for Polar Codes. 2017.
[5] "Performance Review of Successive Cancellation Decoding Methods of Polar Codes", International Journal of Science and Research (IJSR), vol. 5, no. 5, pp. 1507-1510, 2016.
[6]A. Alamdar-Yazdi and F. Kschischang, “A Simplified Successive-Cancellation Decoder for Polar Codes”, IEEE Communications Letters, vol. 15, no. 12, pp. 1378-1380, 2011.
[7]M. Gdeisat and F. Lilley, Matlab by example. London: Elsevier, 2013.
[8]K. Niu, K. Chen, J. Lin and Q. Zhang, “Polar codes: Primary concepts and practical decoding algorithms”, IEEE Communications Magazine, vol. 52, no. 7, pp. 192-203, 2014.
[9] D. Indumathi, "Implementation of Polar Codes in 5G", International Journal for Research in Applied Science and Engineering Technology, pp. 498-500, 2017.
[10]G. Chen, Z. Zhang, C. Zhong and L. Zhang, “A Low Complexity Encoding Algorithm for Systematic Polar Codes”, IEEE Communications Letters, pp. 1-1, 2016.
[11]H. Saber and I. Marsland, “Design of Generalized Concatenated Codes Based on Polar Codes With Very Short Outer Codes”, IEEE Transactions on Vehicular Technology, vol. 66, no. 4, pp. 3103-3115, 2017.
[12]C. Romila, “Digicomm: A MATLAB-Based Digital Communication System Simulator”, IOSR Journal of Electronics and Communication Engineering, vol. 12, no. 03, pp. 38-46, 2017.