This paper introduces an efficient Viterbi decoder, termed the Viterbi decoder with window system. Simulation results over Gaussian channels are obtained for rate-1/2, 1/3 and 2/3 codes combined with TCM encoders of memory order 2 and 3, and show that the proposed scheme outperforms the classical Viterbi decoder by a gain of 1 dB. We also propose a function, RSCPOLY2TRELLIS, which creates the trellis structure of a recursive systematic convolutional (RSC) encoder from the matrix “H”. Moreover, we compare the decoding algorithms for the TCM encoder, such as soft- and hard-decision Viterbi, with the variants of the MAP decoder known as the BCJR or forward-backward algorithm, which performs very well in decoding TCM but whose cost depends on the size of the code, the memory, and the CPU requirements of the application.
Prediction of a reliable code for wireless communication systems (iaemedu)
1) The document proposes super-orthogonal space-time trellis codes (SOSTTCs) using differential binary phase-shift keying (BPSK), quadriphase shift keying (QPSK) and eight-phase shift keying (8PSK) for noncoherent wireless communication systems without channel state information.
2) A new decoding algorithm is proposed with reduced complexity compared to traditional decoding, but with the same performance. Computer simulations using a geometric two-ring channel model show the performance of the SOSTTCs under different conditions.
3) The performance of the differential SOSTTCs is approximately 3 dB worse than that of coherent SOSTTCs, which have channel state information, but differential encoding has
MDCT audio coding with pulse vector quantizers (Ericsson)
This paper describes a novel audio coding algorithm that is a building block in the recently standardized 3GPP EVS codec. The presented scheme operates in the Modified Discrete Cosine Transform (MDCT) domain and deploys a Split-PVQ pulse coding quantizer, a noise-fill, and a gain control optimized for the quantizer’s properties. A complexity analysis in terms of WMOPS is presented to illustrate that the proposed Split-PVQ concept and dynamic range optimized MPVQ-indexing are suitable for real-time audio coding.
Iaetsd implementation of power efficient iterative logarithmic multiplier usi... (Iaetsd Iaetsd)
This document describes the design and implementation of a power efficient iterative logarithmic multiplier using Mitchell's algorithm and reversible logic. It involves converting multiplication to addition using logarithmic numbers. The proposed design implements a basic block consisting of leading one detectors, encoders, barrel shifters and a decoder to calculate an approximate product. Error correction circuits are then cascaded with the basic blocks to improve accuracy. The 4x4 reversible logarithmic multiplier is designed and simulated using Xilinx tools, demonstrating lower power consumption through the use of reversible logic.
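The core idea of Mitchell's algorithm, converting multiplication into an addition of approximate logarithms, can be sketched in a few lines of Python. This is a behavioral software model only, not the reversible-logic hardware described above; the function name is illustrative:

```python
def mitchell_multiply(a, b):
    """Approximate a*b using Mitchell's logarithmic algorithm.

    Each operand n is written as 2**k * (1 + x) with 0 <= x < 1, so
    log2(n) is approximated by k + x and the product becomes an
    addition of the two approximate logarithms.
    """
    if a == 0 or b == 0:
        return 0
    k1, k2 = a.bit_length() - 1, b.bit_length() - 1   # leading-one positions
    x1 = (a - (1 << k1)) / (1 << k1)                   # fractional parts
    x2 = (b - (1 << k2)) / (1 << k2)
    if x1 + x2 < 1:
        approx = (1 << (k1 + k2)) * (1 + x1 + x2)
    else:
        approx = (1 << (k1 + k2 + 1)) * (x1 + x2)
    return int(approx)
```

The basic block always underestimates the true product (by at most about 11.1%), which is exactly the residual that the cascaded error-correction circuits in the paper are meant to reduce.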
IJCER (www.ijceronline.com) International Journal of computational Engineerin... (ijceronline)
The document discusses video compression using the Set Partitioning in Hierarchical Trees (SPIHT) algorithm and neural networks. It presents the principles of SPIHT coding and the backpropagation algorithm for neural networks. Various neural network training algorithms are tested for compressing video frames, including gradient descent with momentum and adaptive learning. The results show the compressed frames with different algorithms, and gradient descent with momentum and adaptive learning achieved the best compression ratio of 1.1737089:1 while maintaining image clarity.
Fixed Point Realization of Iterative LR-Aided Soft MIMO Decoding Algorithm (CSCJournals)
Multiple-input multiple-output (MIMO) systems have been widely acclaimed in order to provide high data rates. Recently Lattice Reduction (LR) aided detectors have been proposed to achieve near Maximum Likelihood (ML) performance with low complexity. In this paper, we develop the fixed point design of an iterative soft decision based LR-aided K-best decoder, which reduces the complexity of existing sphere decoder. A simulation based word-length optimization is presented for physical implementation of the K-best decoder. Simulations show that the fixed point result of 16 bit precision can keep bit error rate (BER) degradation within 0.3 dB for 8×8 MIMO systems with different modulation schemes.
In this paper, low-complexity architectures for finding the first two maximum or minimum values, which are of paramount importance in several applications including iterative decoders, are presented. The min-sum processing step produces only two distinct output magnitude values irrespective of the number of incoming bit-to-check messages. These new micro-architecture structures use the minimum number of comparators by exploiting the concept of survivors in the search, resulting in a reduced number of comparisons and consequently reduced energy use. Multipliers are complex units and play an important role in determining the overall area, speed and power consumption of digital designs; by optimizing the multiplier we can minimize parameters such as latency, complexity and power consumption. The decoding algorithms we propose generalize and unify the decoding schemes originally presented for product codes and those of low-density parity-check codes.
A new efficient way based on special stabilizer multiplier permutations to at... (IJECEIAES)
BCH codes represent an important class of cyclic error-correcting codes; their minimum distances are known only in some cases, and determining them remains an open NP-hard problem in coding theory, especially for large lengths. This paper presents an efficient scheme, ZSSMP (Zimmermann Special Stabilizer Multiplier Permutation), to find the true value of the minimum distance for many large BCH codes. The proposed method consists in searching for a codeword of minimum weight with the Zimmermann algorithm in the subcodes fixed by special stabilizer multiplier permutations. These few subcodes have very small dimensions compared to the dimension of the considered code itself, and therefore the search for a codeword of globally minimum weight is simplified in terms of run-time complexity. ZSSMP is validated on all BCH codes of length 255, for which it gives the exact value of the minimum distance. For BCH codes of length 511, the proposed technique considerably outperforms the well-known scheme of Canteaut and Chabaud used to attack code-based public-key cryptosystems. ZSSMP is very fast and catches the smallest-weight codewords in a few seconds. By exploiting the efficiency and speed of ZSSMP, the true minimum distances, and consequently the error-correcting capability, of the whole set of 165 BCH codes of length up to 1023 are determined, except for the two cases of the BCH(511,148) and BCH(511,259) codes. The comparison of ZSSMP with other powerful methods proves its quality for attacking the hardness of the minimum-weight search problem, at least for the codes studied in this paper.
Design and Performance Analysis of Convolutional Encoder and Viterbi Decoder ... (IJERA Editor)
In digital communication, forward error correction methods have great practical importance when the channel is noisy. Convolutional error correction codes can correct both random and burst errors. Convolutional encoding has been used in digital communication systems including deep space and wireless communication. The error correction capability of a convolutional code depends on the code rate and constraint length: a low code rate and a high constraint length give more error correction capability but also introduce large overhead. This paper introduces convolutional encoders for various constraint lengths; by increasing the constraint length the error correction capability can be increased. The performance and error correction also depend on the selection of the generator polynomial, and this paper also introduces a good generator polynomial with high performance and error correction capability.
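As an illustration of how generator polynomials drive a convolutional encoder, here is a minimal software sketch of a rate-1/2 encoder. The (7, 5) octal pair for constraint length 3 is a textbook choice used here for illustration, not necessarily the polynomial the paper recommends:

```python
def conv_encode(bits, gens=(0b111, 0b101), K=3):
    """Rate-1/2 convolutional encoder.

    gens are the generator polynomials (default: the classic (7, 5)
    octal pair for constraint length K = 3). The encoder is flushed
    with K-1 zero tail bits so it ends in the all-zero state.
    """
    state = 0
    out = []
    for bit in bits + [0] * (K - 1):           # append tail to return to state 0
        state = ((state << 1) | bit) & ((1 << K) - 1)
        for g in gens:
            out.append(bin(state & g).count("1") & 1)  # parity of tapped register bits
    return out
```

Each output bit is the modulo-2 sum of the register positions selected by one generator polynomial, which is why the choice of polynomial directly determines the code's distance properties.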
Fpga implementation of soft decision low power convolutional decoder using vi... (ecejntuk)
1. This document discusses an FPGA implementation of a soft decision low power convolutional decoder using the Viterbi algorithm.
2. It reviews literature on adaptive Viterbi decoding techniques that can improve error performance and reduce computational requirements compared to the standard Viterbi algorithm.
3. Convolutional encoding with Viterbi decoding is described as a forward error correction technique well-suited for channels with additive white Gaussian noise. The document provides examples of how error rates increase as the signal-to-noise ratio decreases.
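A minimal hard-decision Viterbi decoder for the textbook rate-1/2, K = 3, (7, 5) code can be sketched as follows. This is a plain software model for illustration, not the low-power FPGA design discussed above:

```python
def viterbi_decode(received, gens=(0b111, 0b101), K=3):
    """Hard-decision Viterbi decoder for a rate-1/2 convolutional code
    (default: generators (7, 5) octal, K = 3); assumes the encoder was
    flushed with K-1 zero tail bits so the trellis terminates in state 0."""
    n_states = 1 << (K - 1)
    INF = float("inf")
    metrics = [0] + [INF] * (n_states - 1)     # only state 0 is valid at the start
    paths = [[] for _ in range(n_states)]
    pairs = [received[i:i + 2] for i in range(0, len(received), 2)]
    for sym in pairs:
        new_metrics = [INF] * n_states
        new_paths = [None] * n_states
        for state in range(n_states):
            if metrics[state] == INF:
                continue
            for bit in (0, 1):
                full = ((state << 1) | bit) & ((1 << K) - 1)
                expected = [bin(full & g).count("1") & 1 for g in gens]
                dist = sum(a != b for a, b in zip(expected, sym))  # Hamming branch metric
                nxt = full & (n_states - 1)
                m = metrics[state] + dist
                if m < new_metrics[nxt]:       # keep the survivor path into nxt
                    new_metrics[nxt] = m
                    new_paths[nxt] = paths[state] + [bit]
        metrics, paths = new_metrics, new_paths
    return paths[0][: -(K - 1)]                # drop the K-1 tail bits
```

Because the survivor with the smallest accumulated Hamming distance is kept at every state, single bit errors in the received stream are corrected, which is the behavior the document's error-rate examples describe.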
Steganography of high embedding efficiency by using an extended matrix encodin... (eSAT Publishing House)
This document summarizes an extended matrix encoding algorithm for steganography proposed in a research paper. The algorithm aims to improve the embedding efficiency and rate of the classic F5 steganography system. It does this by extending the hash function used in matrix encoding to multiple layers, allowing more secret bits to be embedded into each carrier cell while still only modifying one bit. The encoding is represented by a quad (dmax, n, k, L) where L indicates the maximum extension layer. Secret bits are tested against specific extended codes up to layer L, and if they match, additional bits can be embedded into the carrier cell. Experimental results showed the extended algorithm performs better than the classic F5 system.
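The classic single-layer matrix encoding that F5 builds on, carrying k secret bits in 2^k - 1 cover bits while modifying at most one, can be illustrated as follows. The multi-layer extension proposed in the paper is not modeled here; the function names are illustrative:

```python
def matrix_embed(cover, secret_value, k):
    """Classic (1, 2**k - 1, k) matrix embedding as used by F5: hide a
    k-bit value in n = 2**k - 1 cover bits by flipping at most one bit."""
    n = (1 << k) - 1
    assert len(cover) == n
    syndrome = 0
    for i, bit in enumerate(cover, start=1):   # syndrome = XOR of 1-based positions of 1s
        if bit:
            syndrome ^= i
    flip = syndrome ^ secret_value
    stego = list(cover)
    if flip:                                   # flip == 0: message already matches
        stego[flip - 1] ^= 1
    return stego

def matrix_extract(stego):
    """Recover the embedded value as the XOR of the positions of 1-bits."""
    value = 0
    for i, bit in enumerate(stego, start=1):
        if bit:
            value ^= i
    return value
```

With k = 3, for example, 3 secret bits ride on 7 cover bits at the cost of at most one change, which is the embedding-efficiency advantage the extended algorithm pushes further.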
Arithmetic Operations in Multi-Valued Logic (VLSICS Design)
This paper presents arithmetic operations such as addition, subtraction and multiplication in modulo-4 arithmetic, as well as addition and multiplication in a Galois field, using multi-valued logic (MVL). Quaternary-to-binary and binary-to-quaternary converters are designed using down literal circuits. Negation in modular arithmetic is designed with only one gate. The logic design of each operation is achieved by reducing the terms using Karnaugh diagrams, taking the minimum number of gates and the depth of the net into consideration. A quaternary multiplier circuit is proposed to achieve the required optimization. Simulation results for each operation are shown separately using HSPICE.
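The distinction between the two arithmetics involved can be made concrete with a small behavioral sketch: modulo-4 operations are plain integer arithmetic reduced mod 4, while GF(4) multiplication follows the field table (built here on the primitive polynomial x^2 + x + 1, a standard choice and an assumption, since the paper works at the gate level):

```python
def mod4_add(a, b):
    # Ring Z/4Z: ordinary addition reduced modulo 4
    return (a + b) % 4

def mod4_mul(a, b):
    # Ring Z/4Z: ordinary multiplication reduced modulo 4 (2*2 = 0 here)
    return (a * b) % 4

# GF(4) multiplication table with elements {0, 1, x, x+1} encoded as
# {0, 1, 2, 3}, reduced modulo the primitive polynomial x^2 + x + 1.
GF4_MUL = [
    [0, 0, 0, 0],
    [0, 1, 2, 3],
    [0, 2, 3, 1],
    [0, 3, 1, 2],
]

def gf4_mul(a, b):
    return GF4_MUL[a][b]
```

Unlike Z/4Z, every nonzero GF(4) element has a multiplicative inverse, which is why Galois-field arithmetic is the one used in coding applications.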
Investigative Compression Of Lossy Images By Enactment Of Lattice Vector Quan... (IJERA Editor)
In the digital era we live in, efficient representation of data generated by a discrete source and its reliable transmission are unquestionable needs. In this work we focus on source coding, taking an image as the source. Lattice Vector Quantization (LVQ) can be used for source coding as well as for channel coding. LVQ with a generator matrix (GM) and with a codebook is implemented. For the codebook implementation, two codebooks are constructed: one with the 256 lattice points closest to (0,0,0,0) and another with the 256 lattice points closest to (1,0,0,0). The energy of both codes is calculated, and the comparison shows that the code centered at a non-lattice point has lower energy.
Study of the operational SNR while constructing polar codes (IJECEIAES)
Channel coding is commonly based on protecting the information to be communicated across an unreliable medium by adding patterns of redundancy into the transmission path. Also referred to as forward error control coding (FECC), the technique is widely used to enable correcting, or at least detecting, bit errors in digital communication systems. In this paper we study an original FECC known as polar coding, which has proven to meet the typical use cases of the next-generation mobile standard. This work is motivated by the suitability of polar codes for the coming wireless era. Hence, we investigate the performance of polar codes in terms of bit error rate (BER) for several codeword lengths and code rates. We first perform a discrete search to find the best operational signal-to-noise ratio (SNR) at two different code rates while varying the blocklength. We find in our extensive simulations that the BER becomes more sensitive to the operational SNR (OSNR) as we increase the blocklength and code rate. Finally, we note that increasing the blocklength achieves an SNR gain, while increasing the code rate shifts the OSNR domain. This trade-off must be taken into consideration while designing polar codes for high-throughput applications.
Multiuser MIMO Vector Perturbation Precoding (adeelrazi)
This paper proposes methods for sum rate optimization in multi-user MIMO systems using vector perturbation precoding. It derives an expression for sum rate in terms of the average transmitted vector energy. It then uses this to obtain a high-SNR upper bound on sum rate and proposes an extension of vector perturbation that allocates different rates to different users. It also proposes a low-complexity user scheduling algorithm as a method for rate allocation.
Design of Quaternary Logical Circuit Using Voltage and Current Mode Logic (VLSICS Design)
This document describes the design of quaternary logical circuits using voltage mode and current mode logic. It summarizes that quaternary voltage mode logic has 51.78% lower power consumption compared to binary, but requires 3 times more transistors. Quaternary current mode logic has lower area than voltage mode, but higher power consumption. Specifically, it presents the design of quaternary logic gates like inverters, MIN, MAX gates for both modes. Comparative analysis shows voltage mode has lower power while current mode has lower area.
Simulation of Turbo Convolutional Codes for Deep Space Mission (IJERA Editor)
In satellite communication, deep space missions are the most challenging, since the system has to work at very low Eb/No. Concatenated codes are the ideal choice for such missions. The paper describes the simulation of turbo codes in SIMULINK. The performance of a turbo code depends upon various factors; in this paper we consider the impact of interleaver design on the performance of the turbo code. A detailed simulation is presented and the performance is compared across different interleaver designs.
Multi carrier equalization by restoration of redundancy (MERRY) for adaptive... (IJNSA Journal)
This paper proposes a new blind adaptive channel shortening approach for multi-carrier systems. The performance of the discrete Fourier transform DMT (DFT-DMT) system is compared with that of the proposed DST-DMT system over the standard carrier serving area (CSA) loop 1. Simulations of the DST-DMT system demonstrate enhanced bit rates and lower complexity.
Audio coding of harmonic signals is a challenging task for conventional MDCT coding schemes. In this paper we introduce a novel algorithm for improved transform coding of harmonic audio. The algorithm does not deploy the conventional scheme of splitting the input signal into a spectrum envelope and a residual, but models the spectral peak regions. Test results indicate that the presented algorithm outperforms the conventional coding concept.
Reduced Energy Min-Max Decoding Algorithm for Ldpc Code with Adder Correction... (ijceronline)
In this paper, architectures for finding the first two maximum or minimum values, which are of paramount importance in several applications including iterative decoders, are proposed for an LDPC decoder with adder-based correction. The min-sum processing step produces only two different output magnitude values irrespective of the number of incoming bit-to-check messages. These new micro-architecture layouts employ the minimum number of comparators by exploiting the concept of survivors in the search, resulting in a reduced number of comparisons and consequently reduced energy use. Multipliers are complex units and play an important role in determining the overall area, speed and power consumption of digital designs; by optimizing the multiplier we can minimize parameters such as latency, complexity and power consumption.
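The two-output-magnitude property of the min-sum check node rests on a survivor-based two-minimum search, which can be sketched in software as follows (an illustrative model, not the proposed micro-architecture; function names are assumptions):

```python
def two_smallest(values):
    """Single-pass search for the two smallest input magnitudes of a
    check node, keeping survivors the way a min-sum check-node unit does."""
    min1 = min2 = float("inf")
    min1_idx = -1
    for i, v in enumerate(values):
        m = abs(v)
        if m < min1:
            min1, min2, min1_idx = m, min1, i  # old min1 survives as min2
        elif m < min2:
            min2 = m
    return min1, min2, min1_idx

def check_node_magnitudes(values):
    """Min-sum output magnitude toward each edge: the minimum over all
    *other* edges, i.e. min2 for the edge holding min1, min1 elsewhere."""
    min1, min2, idx = two_smallest(values)
    return [min2 if i == idx else min1 for i in range(len(values))]
```

Whatever the check-node degree, the outputs take only the two values min1 and min2, which is exactly why the comparator count, rather than the message count, dominates the energy cost.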
The MAC architecture is used in real-time digital signal processing and multimedia information processing, which require high throughput. A novel method to estimate the transition activity at the nodes of a multiplier-accumulator architecture based on the modified Booth algorithm implementing a finite impulse response filter is proposed in this paper. The input signals are described by a stationary Gaussian process, and the transition activity per bit of a signal word is modeled according to the dual bit type (DBT) model. The estimation is based on a mathematical formulation of the multiplexing mechanism at the breakpoints of the DBT model.
1. The document describes techniques for implementing complex enumeration for multi-user MIMO vector precoding, including the Schnorr-Euchner enumeration algorithm, circular set enumeration, and neighbour expansion methods.
2. A "puzzle enumerator" technique is proposed that divides the complex plane into regions and locally enumerates nodes within each region to identify the most favorable nodes, without requiring distance computations.
3. The puzzle enumerator, circular set enumeration, and neighbour expansion techniques were implemented on an FPGA. The puzzle enumerator achieved the lowest latency and area occupation compared to other techniques since it does not require distance computations or sorting.
Performances Concatenated LDPC based STBC-OFDM System and MRC Receivers (IJECEIAES)
This document presents a study on the performance of a low density parity check (LDPC) coded orthogonal frequency division multiplexing (OFDM) system using space time block coding (STBC) under various digital modulations and channel conditions. The system incorporates a 3/4 rate convolutional encoder and a LDPC encoder. At the receiver, maximum ratio combining is implemented for channel equalization. Simulation results show that the LDPC coded OFDM system outperforms an uncoded system, and provides lower bit error rates under binary phase shift keying modulation in an additive white Gaussian noise channel.
Hardware Architecture of Complex K-best MIMO Decoder (CSCJournals)
This paper presents a hardware architecture of a complex K-best Multiple Input Multiple Output (MIMO) decoder that reduces the complexity of the Maximum Likelihood (ML) detector. We develop a novel low-power VLSI design of a complex K-best decoder for MIMO with the 64-QAM modulation scheme. The use of Schnorr-Euchner (SE) enumeration and a new parameter, Rlimit, reduces the complexity of calculating the K best nodes while increasing performance. A total word length of only 16 bits has been adopted for the hardware design, limiting the bit error rate (BER) degradation to 0.3 dB with list size K and Rlimit equal to 4. The proposed VLSI architecture is modeled in Verilog HDL using Xilinx and synthesized using Synopsys Design Vision in 45 nm CMOS technology. According to the synthesis results, it achieves 1090.8 Mbps throughput with a power consumption of 782 mW and a latency of 0.33 us. The maximum frequency of the proposed design is 181.8 MHz.
This document discusses various methods for digital-to-digital conversion including line coding techniques like unipolar schemes, polar schemes, and bipolar schemes. It also discusses analog-to-digital conversion techniques like pulse code modulation which involves sampling, quantization, and encoding of analog signals into digital form. Additionally, it covers digital-to-analog conversion and analog-to-analog conversion methods like amplitude modulation, frequency modulation, and phase modulation.
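A few of the line coding schemes mentioned can be illustrated directly. The sketch below models unipolar NRZ, polar NRZ-L, and bipolar AMI, where successive marks alternate polarity to remove the DC component (levels are normalized to ±1 for illustration):

```python
def unipolar_nrz(bits):
    """Unipolar NRZ: 1 -> +V, 0 -> 0 (signal never goes negative)."""
    return [1 if b else 0 for b in bits]

def polar_nrz_l(bits):
    """Polar NRZ-L: 1 -> +V, 0 -> -V (level encodes the bit value)."""
    return [1 if b else -1 for b in bits]

def bipolar_ami(bits):
    """Bipolar AMI: 0 -> 0; successive 1s alternate between +V and -V,
    so the average line level (DC component) is driven toward zero."""
    level, out = 1, []
    for b in bits:
        if b:
            out.append(level)
            level = -level
        else:
            out.append(0)
    return out
```

Comparing the outputs for the same bit stream makes the trade-offs in the document concrete: unipolar has a DC offset, polar uses both polarities, and AMI's alternation yields a zero-mean signal at the cost of three levels.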
Prediction of a reliable code for wireless communication systems (IAEME Publication)
This document discusses the development of super-orthogonal space-time trellis codes (SOSTTCs) using differential modulation for noncoherent wireless communication systems without channel state information. SOSTTCs are designed using binary phase-shift keying (BPSK), quadriphase shift keying (QPSK), and eight-phase shift keying (8PSK). A new decoding algorithm is proposed with reduced complexity compared to traditional decoding, while maintaining the same performance. Computer simulations using a geometric two-ring channel model evaluate the performance of the SOSTTCs under different channel and transmission scenarios. The performance of coherent and noncoherent schemes is compared, with coherent achieving approximately 3 dB better than differential at the cost of
This paper introduces a Simulink model design for a modified fountain code. The code is a new version of the traditional Luby transform (LT) codes.
The design constructs the blocks required for generation of the generator matrix of limited-degree-hopping-segment Luby transform (LDHS-LT) codes. This code is especially designed for short data files, which are of great interest for wireless sensor networks. It generates the degrees in a predetermined sequence rather than by random generation, and partitions the data file into segments. The data packet selection is made serially according to the integers produced by both the degree and segment generators. The code is tested using a Monte Carlo simulation approach against conventional code generation using the robust soliton distribution (RSD) for the degrees, and the simulation results show better performance on all testing parameters.
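The basic LT encoding step, XOR-ing a degree-many selection of source segments into one coded packet, can be sketched as follows. Here the degree is passed in and the selection is plain uniform sampling for illustration, whereas the LDHS-LT code described above uses a predetermined degree sequence and segment-confined selection:

```python
import random

def lt_encode_symbol(segments, degree, rng):
    """Produce one LT-coded packet: XOR `degree` distinct source segments
    chosen pseudo-randomly. Returns the chosen indices (which a real code
    would communicate via a shared seed) together with the packet."""
    chosen = rng.sample(range(len(segments)), degree)
    packet = 0
    for i in chosen:
        packet ^= segments[i]   # XOR accumulates the selected segments
    return sorted(chosen), packet
```

Decoding then peels degree-1 packets and XORs recovered segments out of the rest; the point of tuning the degree distribution (RSD, or the predetermined LDHS sequence) is to keep degree-1 packets appearing throughout that process.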
This document summarizes a research paper that proposes using parallel concatenated turbo codes in wireless sensor networks in an adaptive way. The key points are:
1) Turbo codes can achieve near-Shannon limit performance but decoding is complex, making them difficult to implement on energy-constrained sensor nodes.
2) The proposed approach shifts the complex turbo decoding to the base station while sensor nodes implement encoding and basic error correction.
3) At sensor nodes, a parallel concatenated convolutional code (PCCC) circuit encodes data and detects/corrects errors in forwarded packets. This improves energy efficiency and reliability over the wireless sensor network.
In the OFDM-IDMA scheme, intersymbol interference (ISI) is resolved by the OFDM layer and multiple access interference (MAI) is suppressed by the IDMA layer at low cost. However, the OFDM-IDMA scheme suffers from a high peak-to-average power ratio (PAPR). To remove the high-PAPR problem, a hybrid multiple access scheme, SC-FDM-IDMA, has been proposed. In this paper, a bit error rate (BER) performance comparison of the SC-FDM-IDMA, OFDM-IDMA and IDMA schemes is presented. Moreover, the BER performance of various subcarrier mapping methods for the SC-FDM-IDMA scheme, as well as other results with variation of different parameters, is demonstrated. Finally, a simulation result showing BER performance improvement when employing a BCH code is presented. All the simulation results demonstrate the suitability of the SC-FDM-IDMA scheme for wireless communication under an AWGN channel environment.
Design and Performance Analysis of Convolutional Encoder and Viterbi Decoder ...IJERA Editor
In digital communication, forward error correction methods are of great practical importance when the channel is noisy. Convolutional error correction codes can correct both random and burst errors. Convolutional encoding has been used in digital communication systems including deep-space communication and wireless communication. The error correction capability of a convolutional code depends on its code rate and constraint length: a low code rate and a high constraint length give more error correction capability, but also introduce large overhead. This paper introduces convolutional encoders for various constraint lengths; increasing the constraint length increases the error correction capability. Performance and error correction also depend on the selection of the generator polynomial, and the paper identifies a generator polynomial with high performance and error correction capability.
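As a concrete illustration of the encoder structure the abstract describes, here is a minimal rate-1/2, constraint-length-3 convolutional encoder; the generator polynomials (7, 5) in octal are a common textbook choice, not necessarily the ones evaluated in the paper:

```python
# Sketch of a rate-1/2 convolutional encoder, constraint length K = 3,
# with illustrative generator polynomials G0 = 7 (111) and G1 = 5 (101).

def conv_encode(bits, g0=0b111, g1=0b101, k=3):
    """Encode a list of bits; two output bits per input bit."""
    state = 0  # shift register holding the last K-1 input bits
    out = []
    for b in bits:
        reg = (b << (k - 1)) | state              # new bit plus history
        out.append(bin(reg & g0).count("1") % 2)  # parity tap for G0
        out.append(bin(reg & g1).count("1") % 2)  # parity tap for G1
        state = reg >> 1                          # shift the register
    return out

print(conv_encode([1, 0, 1, 1]))  # [1, 1, 1, 0, 0, 0, 0, 1]
```

Changing `k` and the generator masks is all it takes to experiment with the constraint-length and polynomial choices the paper compares.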
Fpga implementation of soft decision low power convolutional decoder using vi...ecejntuk
1. This document discusses an FPGA implementation of a soft decision low power convolutional decoder using the Viterbi algorithm.
2. It reviews literature on adaptive Viterbi decoding techniques that can improve error performance and reduce computational requirements compared to the standard Viterbi algorithm.
3. Convolutional encoding with Viterbi decoding is described as a forward error correction technique well-suited for channels with additive white Gaussian noise. The document provides examples of how error rates increase as the signal-to-noise ratio decreases.
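The Viterbi algorithm summarized above can be sketched end to end in a few dozen lines. The following hard-decision decoder for an illustrative rate-1/2, constraint-length-3 code (generators (7, 5) octal, assumed for the example; not the paper's FPGA design) keeps one surviving path per trellis state:

```python
G = (0b111, 0b101)   # illustrative generator polynomials (7, 5) octal
NSTATES = 4          # 2^(K-1) states for constraint length K = 3

def branch(state, bit):
    """Next state and the two code bits for (state, input bit)."""
    reg = (bit << 2) | state
    out = tuple(bin(reg & g).count("1") % 2 for g in G)
    return reg >> 1, out

def encode(bits):
    state, out = 0, []
    for b in bits:
        state, o = branch(state, b)
        out.extend(o)
    return out

def viterbi(received):
    """Hard-decision Viterbi: keep one minimum-Hamming-distance
    surviving path per trellis state."""
    INF = float("inf")
    metric = [0] + [INF] * (NSTATES - 1)   # start in the all-zero state
    paths = [[] for _ in range(NSTATES)]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric = [INF] * NSTATES
        new_paths = [None] * NSTATES
        for s in range(NSTATES):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                ns, o = branch(s, b)
                m = metric[s] + (o[0] != r[0]) + (o[1] != r[1])
                if m < new_metric[ns]:
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[metric.index(min(metric))]

msg = [1, 0, 1, 1, 0, 0]     # trailing zeros flush the encoder
rx = encode(msg)
rx[3] ^= 1                   # inject one channel bit error
print(viterbi(rx) == msg)    # True: the single error is corrected
```

A soft-decision variant, as in the paper, would replace the Hamming branch metric with a Euclidean one computed from unquantized channel samples.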
Stegnography of high embedding efficiency by using an extended matrix encodin...eSAT Publishing House
This document summarizes an extended matrix encoding algorithm for steganography proposed in a research paper. The algorithm aims to improve the embedding efficiency and rate of the classic F5 steganography system. It does this by extending the hash function used in matrix encoding to multiple layers, allowing more secret bits to be embedded into each carrier cell while still only modifying one bit. The encoding is represented by a quad (dmax, n, k, L) where L indicates the maximum extension layer. Secret bits are tested against specific extended codes up to layer L, and if they match, additional bits can be embedded into the carrier cell. Experimental results showed the extended algorithm performs better than the classic F5 system.
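For readers unfamiliar with the classic matrix encoding that the paper extends, the base (1, 2^k - 1, k) scheme can be sketched in a few lines: k secret bits are embedded into n = 2^k - 1 carrier bits while modifying at most one bit. (This is the standard F5 building block, not the extended multi-layer variant the paper proposes.)

```python
def f5_hash(carrier):
    """XOR of the (1-based) indices of all 1-valued carrier bits."""
    h = 0
    for i, c in enumerate(carrier, start=1):
        if c:
            h ^= i
    return h

def embed(carrier, secret):
    """Embed integer secret (0..n) into carrier, flipping at most 1 bit."""
    flip = f5_hash(carrier) ^ secret
    out = list(carrier)
    if flip:
        out[flip - 1] ^= 1   # flipping index i toggles i in the hash
    return out

carrier = [1, 0, 1, 1, 0, 0, 1]   # n = 7 carrier bits, k = 3
secret = 0b101                     # 3 secret bits
stego = embed(carrier, secret)
print(f5_hash(stego) == secret)    # True: extraction recovers the secret
```

The extension described above adds further layers to this hash so that more than k bits can sometimes be embedded with the same single modification.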
Arithmetic Operations in Multi-Valued LogicVLSICS Design
This paper presents arithmetic operations such as addition, subtraction and multiplication in modulo-4 arithmetic, as well as addition and multiplication in a Galois field, using multi-valued logic (MVL). Quaternary-to-binary and binary-to-quaternary converters are designed using down literal circuits. Negation in modular arithmetic is designed with only one gate. The logic design of each operation is achieved by reducing the terms using Karnaugh diagrams, keeping the number of gates and the depth of the net to a minimum. A quaternary multiplier circuit is proposed to achieve the required optimization. Simulation results for each operation are shown separately using HSPICE.
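As a software cross-check of the Galois-field operations realized here in quaternary hardware, GF(4) arithmetic can be sketched as polynomial arithmetic over GF(2) modulo x^2 + x + 1 (the standard reducing polynomial; the paper's gate-level encoding may differ):

```python
# Elements 0..3 encode polynomials b1*x + b0 over GF(2).

def gf4_add(a, b):
    return a ^ b             # addition is bitwise XOR (characteristic 2)

def gf4_mul(a, b):
    """Carry-less multiply, then reduce modulo x^2 + x + 1 (0b111)."""
    p = 0
    for i in range(2):
        if (b >> i) & 1:
            p ^= a << i
    for shift in (1, 0):     # clear any degree-3 then degree-2 term
        if p >> (2 + shift):
            p ^= 0b111 << shift
    return p

# full GF(4) multiplication table
for a in range(4):
    print([gf4_mul(a, b) for b in range(4)])
```

The nonzero elements form a cyclic group of order 3, e.g. `gf4_mul(2, 2)` gives 3 and `gf4_mul(2, 3)` gives 1, which is a quick sanity check for any quaternary multiplier design.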
Investigative Compression Of Lossy Images By Enactment Of Lattice Vector Quan...IJERA Editor
In the digital era we live in, efficient representation of data generated by a discrete source and its reliable transmission are an unquestionable need. In this work we focus on source coding, taking an image as the source. Lattice Vector Quantization (LVQ) can be used for source coding as well as for channel coding. LVQ with a Generator Matrix (GM) and with a codebook is implemented. For the codebook implementation, two codebooks are constructed: one with the 256 lattice points closest to (0,0,0,0) and another with the 256 lattice points closest to (1,0,0,0). The energy of both codes is calculated. Comparing the energies of the two codes, we find that the code centered at a non-lattice point has lower energy.
Study of the operational SNR while constructing polar codes IJECEIAES
Channel coding is commonly based on protecting information to be communicated across an unreliable medium by adding patterns of redundancy into the transmission path. Also referred to as forward error control coding (FECC), the technique is widely used to enable correcting, or at least detecting, bit errors in digital communication systems. In this paper we study an original FECC known as polar coding, which has proven to meet the typical use cases of the next-generation mobile standard. This work is motivated by the suitability of polar codes for the coming wireless era. Hence, we investigate the performance of polar codes in terms of bit error rate (BER) for several codeword lengths and code rates. We first perform a discrete search to find the best operational signal-to-noise ratio (SNR) at two different code rates, while varying the blocklength. We find in our extensive simulations that the BER becomes more sensitive to the operational SNR (OSNR) as we increase the blocklength and code rate. Finally, we note that increasing the blocklength achieves an SNR gain, while increasing the code rate changes the OSNR domain. This trade-off must be taken into consideration when designing polar codes for high-throughput applications.
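The polar encoder underlying such BER measurements is the recursive butterfly transform x = u F^{⊗n}. A minimal sketch (bit-reversal permutation and frozen-bit selection omitted, so this is the raw transform rather than a complete polar code):

```python
def polar_encode(u):
    """Apply the polar transform butterfly network; len(u) must be
    a power of two. Arithmetic is over GF(2), i.e. XOR."""
    x = list(u)
    n = len(x)
    step = 1
    while step < n:
        for i in range(0, n, 2 * step):
            for j in range(i, i + step):
                x[j] ^= x[j + step]   # upper branch: u1 XOR u2
        step *= 2
    return x

print(polar_encode([1, 0, 1, 1]))   # [1, 1, 0, 1]
```

Blocklength scaling studies like the one in the paper rerun exactly this transform at N = 2^n for increasing n, with the information bits placed on the most reliable synthetic channels.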
Multiuser MIMO Vector Perturbation Precodingadeelrazi
This paper proposes methods for sum rate optimization in multi-user MIMO systems using vector perturbation precoding. It derives an expression for sum rate in terms of the average transmitted vector energy. It then uses this to obtain a high-SNR upper bound on sum rate and proposes an extension of vector perturbation that allocates different rates to different users. It also proposes a low-complexity user scheduling algorithm as a method for rate allocation.
Design of Quaternary Logical Circuit Using Voltage and Current Mode LogicVLSICS Design
This document describes the design of quaternary logical circuits using voltage mode and current mode logic. It summarizes that quaternary voltage mode logic has 51.78% lower power consumption compared to binary, but requires 3 times more transistors. Quaternary current mode logic has lower area than voltage mode, but higher power consumption. Specifically, it presents the design of quaternary logic gates like inverters, MIN, MAX gates for both modes. Comparative analysis shows voltage mode has lower power while current mode has lower area.
Simulation of Turbo Convolutional Codes for Deep Space MissionIJERA Editor
In satellite communication, deep-space missions are the most challenging, since the system has to work at a very low Eb/No. Concatenated codes are an ideal choice for such deep-space missions. This paper describes the simulation of turbo codes in SIMULINK. The performance of a turbo code depends on various factors; in this paper we consider the impact of interleaver design. A detailed simulation is presented and the performance of different interleaver designs is compared.
Multi carrier equalization by restoration of redundanc y (merry) for adaptive...IJNSA Journal
This paper proposes a new blind adaptive channel shortening approach for multi-carrier systems. The performance of the discrete Fourier transform DMT (DFT-DMT) system is compared with that of the proposed DST-DMT system over the standard carrier serving area (CSA) loop 1. Simulations of the DST-DMT system demonstrate enhanced bit rates at lower complexity.
Audio coding of harmonic signals is a challenging task for conventional MDCT coding schemes. In this paper we introduce a novel algorithm for improved transform coding of harmonic audio. The algorithm does not deploy the conventional scheme of splitting the input signal into a spectrum envelope and a residual, but models the spectral peak regions. Test results indicate that the presented algorithm outperforms the conventional coding concept.
Reduced Energy Min-Max Decoding Algorithm for Ldpc Code with Adder Correction...ijceronline
In this paper, highly linear architectures for finding the first two maximum or minimum values, which are of paramount importance in several applications including iterative decoders, are proposed for LDPC decoding with adder correction. The min-sum processing step produces only two different output magnitude values irrespective of the number of incoming bit-to-check messages. The new micro-architecture layouts employ the minimum number of comparators by exploiting the concept of survivors in the search, resulting in a reduced number of comparisons and consequently reduced energy use. Multipliers are complex units and play an important role in determining the overall area, speed and power consumption of digital designs; by optimizing the multiplier we can minimize parameters such as latency, complexity and power consumption.
The MAC architecture is used in real-time digital signal processing and multimedia information processing, which require high throughput. A novel method to estimate the transition activity at the nodes of a multiplier-accumulator architecture based on the modified Booth algorithm implementing a finite impulse response filter is proposed in this paper. The input signals are described by a stationary Gaussian process and the transition activity per bit of a signal word is modeled according to the dual bit type (DBT) model. This estimation is based on a mathematical formulation of the multiplexing mechanism on the breakpoints of the DBT model.
1. The document describes techniques for implementing complex enumeration for multi-user MIMO vector precoding, including the Schnorr-Euchner enumeration algorithm, circular set enumeration, and neighbour expansion methods.
2. A "puzzle enumerator" technique is proposed that divides the complex plane into regions and locally enumerates nodes within each region to identify the most favorable nodes, without requiring distance computations.
3. The puzzle enumerator, circular set enumeration, and neighbour expansion techniques were implemented on an FPGA. The puzzle enumerator achieved the lowest latency and area occupation compared to other techniques since it does not require distance computations or sorting.
Performances Concatenated LDPC based STBC-OFDM System and MRC Receivers IJECEIAES
This document presents a study on the performance of a low density parity check (LDPC) coded orthogonal frequency division multiplexing (OFDM) system using space time block coding (STBC) under various digital modulations and channel conditions. The system incorporates a 3/4 rate convolutional encoder and a LDPC encoder. At the receiver, maximum ratio combining is implemented for channel equalization. Simulation results show that the LDPC coded OFDM system outperforms an uncoded system, and provides lower bit error rates under binary phase shift keying modulation in an additive white Gaussian noise channel.
Hardware Architecture of Complex K-best MIMO DecoderCSCJournals
This paper presents a hardware architecture of a complex K-best Multiple Input Multiple Output (MIMO) decoder that reduces the complexity of the Maximum Likelihood (ML) detector. We develop a novel low-power VLSI design of a complex K-best decoder for MIMO and the 64-QAM modulation scheme. Use of Schnorr-Euchner (SE) enumeration and a new parameter, Rlimit, reduces the complexity of calculating the K best nodes to a certain level with increased performance. A total word length of only 16 bits has been adopted for the hardware design, limiting the bit error rate (BER) degradation to 0.3 dB with list size K and Rlimit equal to 4. The proposed VLSI architecture is modeled in Verilog HDL using Xilinx and synthesized using Synopsys Design Vision in 45 nm CMOS technology. According to the synthesis results, it achieves 1090.8 Mbps throughput with a power consumption of 782 mW and a latency of 0.33 us. The maximum frequency of the proposed design is 181.8 MHz.
This document discusses various methods for digital-to-digital conversion including line coding techniques like unipolar schemes, polar schemes, and bipolar schemes. It also discusses analog-to-digital conversion techniques like pulse code modulation which involves sampling, quantization, and encoding of analog signals into digital form. Additionally, it covers digital-to-analog conversion and analog-to-analog conversion methods like amplitude modulation, frequency modulation, and phase modulation.
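A few of the line-coding schemes mentioned can be sketched directly; the Manchester convention below (a 1 mapped to a low-to-high transition) is one of the two common conventions and is chosen here purely for illustration:

```python
def unipolar_nrz(bits):
    """Unipolar NRZ: 1 -> +V, 0 -> 0 (normalized levels)."""
    return [1 if b else 0 for b in bits]

def polar_nrz_l(bits):
    """Polar NRZ-L: 1 -> +V, 0 -> -V."""
    return [1 if b else -1 for b in bits]

def manchester(bits):
    """Each bit becomes two half-symbols: 1 -> (-1, +1), 0 -> (+1, -1),
    guaranteeing a mid-bit transition for receiver synchronization."""
    out = []
    for b in bits:
        out.extend((-1, 1) if b else (1, -1))
    return out

bits = [1, 0, 1]
print(unipolar_nrz(bits))   # [1, 0, 1]
print(polar_nrz_l(bits))    # [1, -1, 1]
print(manchester(bits))     # [-1, 1, 1, -1, -1, 1]
```

The doubled output length of `manchester` makes the classic bandwidth/self-clocking trade-off between these schemes visible at a glance.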
Prediction of a reliable code for wireless communication systemsIAEME Publication
This document discusses the development of super-orthogonal space-time trellis codes (SOSTTCs) using differential modulation for noncoherent wireless communication systems without channel state information. SOSTTCs are designed using binary phase-shift keying (BPSK), quadriphase shift keying (QPSK), and eight-phase shift keying (8PSK). A new decoding algorithm is proposed with reduced complexity compared to traditional decoding, while maintaining the same performance. Computer simulations using a geometric two-ring channel model evaluate the performance of the SOSTTCs under different channel and transmission scenarios. The performance of coherent and noncoherent schemes is compared, with coherent achieving approximately 3 dB better than differential at the cost of
This paper introduces a Simulink model design for a modified fountain code, a new version of the traditional Luby transform (LT) codes.
The design constructs the blocks required to generate the generator matrix of limited-degree-hopping-segment Luby transform (LDHS-LT) codes. This code is especially designed for short data files, which are of great interest for wireless sensor networks. It generates the degrees in a predetermined sequence rather than randomly, and partitions the data file into segments. Data packets are selected serially according to the integers produced by the degree and segment generators. The code is tested using a Monte Carlo simulation approach against conventional code generation using the robust soliton distribution (RSD) for degree generation, and the simulation results confirm better performance on all testing parameters.
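The LT mechanism being modified can be sketched compactly: each encoded packet is the XOR of a degree-d subset of source blocks, and the peeling decoder repeatedly solves any packet with exactly one unknown block left. The toy degree distribution below stands in for the paper's RSD and LDHS generators:

```python
import random

def lt_encode(blocks, n_packets, seed=1):
    """Each packet: (set of source indices, XOR of those blocks)."""
    rng = random.Random(seed)
    packets = []
    for _ in range(n_packets):
        d = rng.choice([1, 2, 2, 3])                 # toy degree distribution
        idx = frozenset(rng.sample(range(len(blocks)), d))
        val = 0
        for i in idx:
            val ^= blocks[i]
        packets.append((idx, val))
    return packets

def lt_decode(packets, k):
    """Peeling decoder: solve any packet with exactly one unknown block."""
    out = [None] * k
    changed = True
    while changed and any(v is None for v in out):
        changed = False
        for idx, val in packets:
            unknown = [i for i in idx if out[i] is None]
            if len(unknown) == 1:
                solved = val
                for i in idx:                # XOR out recovered blocks
                    if out[i] is not None:
                        solved ^= out[i]
                out[unknown[0]] = solved
                changed = True
    return out

blocks = [5, 9, 3, 12]
pkts = [(frozenset({1}), 9),          # handcrafted, guaranteed decodable
        (frozenset({0, 1}), 5 ^ 9),
        (frozenset({1, 2}), 9 ^ 3),
        (frozenset({2, 3}), 3 ^ 12)]
pkts += lt_encode(blocks, 4)          # extra consistent packets cannot hurt
print(lt_decode(pkts, 4) == blocks)   # True
```

The LDHS-LT idea replaces the random index selection in `lt_encode` with deterministic degree and segment generators, which is what makes short-block decoding more reliable.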
Design of High Speed and Low Power Veterbi Decoder for Trellis Coded Modulati...ijsrd.com
It is well known that the Viterbi decoder (VD) is the dominant module determining the overall power consumption of TCM decoders. A high-speed, low-power design of Viterbi decoders for trellis coded modulation (TCM) systems is presented in this paper. We propose a pre-computation architecture incorporating the T-algorithm for the VD, which can effectively reduce the power consumption without much degradation of the decoding speed. A general solution to derive the optimal number of pre-computation steps is also given. Implementation results of a VD for a rate-3/4 convolutional code used in a TCM system show that, compared with the full-trellis VD, the pre-computation architecture reduces the power consumption by as much as 70% without performance loss, while the degradation in clock speed is negligible.
An efficient reconfigurable code rate cooperative low-density parity check co...IJECEIAES
Digital communication is now performed extensively, so drawbacks such as signal attenuation and code-rate fluctuations during the communication process, as well as the overhead of maintaining authentication, need to be minimized; this can be achieved by adopting parallel encoder and decoder operations. To overcome these drawbacks, a reconfigurable code rate cooperative (RCRC) low-density parity check (LDPC) method is proposed. The proposed RCRC-LDPC code can operate at gigabits per second and effectively performs linear encoding in dual-diagonal form, widens the range of code rates, and optimizes the degree distribution of the LDPC mother code. The proposed method optimizes the transmission rate and can operate at a 0.98 code rate, the highest upper-bounded code rate compared with existing methods. The implementation has been carried out using MATLAB, and per the simulation results, the proposed method reaches a throughput efficiency greater than 8.2 (1.9) gigabits per second at a clock frequency of 160 MHz.
Distributed Spatial Modulation based Cooperative Diversity Schemeijwmn
In this paper, a distributed spatial modulation based cooperative diversity scheme for relay wireless networks is proposed, in which the space-time block code is integrated with distributed spatial modulation. The resulting transmission scheme achieves high diversity gain. Using Monte Carlo computer simulations, we show that the proposed transmission scheme outperforms state-of-the-art cooperative relaying schemes in terms of bit error rate (BER) performance.
FPGA Implementation of Soft Output Viterbi Algorithm Using Memoryless Hybrid ...VLSICS Design
The importance of convolutional codes is well established. They are widely used to encode digital data before transmission through noisy or error-prone communication channels to reduce the occurrence of errors. This paper presents a novel decoding technique, memoryless Hybrid Register Exchange, with simulation and FPGA implementation results. It requires a single register, compared to the Register Exchange Method (REM) and Hybrid Register Exchange Method (HREM); therefore the data transfer operations, and ultimately the switching activity, are reduced.
Lightweight hamming product code based multiple bit error correction coding s...journalBEEI
In this paper, we present a multiple bit error correction coding scheme based on an extended Hamming product code combined with type-II HARQ using shared resources for on-chip interconnects. The shared resources reduce the hardware complexity of the encoder and decoder compared to the existing three-stage iterative decoding method for on-chip interconnects. The proposed decoding method achieves 20% and 28% reductions in area and power consumption respectively, with only a small increase in decoder delay compared to the existing three-stage iterative decoding scheme for multiple bit error correction. The proposed code also achieves an excellent improvement in residual flit error rate and saves up to 58% of total power consumption compared to other error control schemes. The low complexity and excellent residual flit error rate make the proposed code suitable for on-chip interconnection links.
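The extended-Hamming building block of such a product code can be sketched concretely; below is a plain (8,4) extended Hamming code with single-error correction and double-error detection (the product-code dimension and the HARQ retransmission logic are not reproduced):

```python
def hamming84_encode(d):
    """d: 4 data bits -> 8-bit extended Hamming codeword.
    Positions 1..7 form a Hamming(7,4) word; position 0 is overall parity."""
    c = [0] * 8
    c[3], c[5], c[6], c[7] = d
    c[1] = c[3] ^ c[5] ^ c[7]   # parity over positions with bit 0 set
    c[2] = c[3] ^ c[6] ^ c[7]   # parity over positions with bit 1 set
    c[4] = c[5] ^ c[6] ^ c[7]   # parity over positions with bit 2 set
    c[0] = sum(c) % 2           # overall (extension) parity bit
    return c

def hamming84_decode(c):
    """Correct any single error; detect (but not correct) double errors."""
    syndrome = 0
    for i in range(1, 8):
        if c[i]:
            syndrome ^= i        # syndrome = position of a single error
    overall = sum(c) % 2
    c = list(c)
    if syndrome and overall:     # single error inside positions 1..7
        c[syndrome] ^= 1
        status = "corrected"
    elif syndrome:               # checks fail but overall parity holds
        return None, "double error detected"
    else:
        status = "ok" if not overall else "parity bit corrected"
    return [c[3], c[5], c[6], c[7]], status

data = [1, 0, 1, 1]
cw = hamming84_encode(data)
rx = list(cw)
rx[6] ^= 1                       # single bit error on the link
print(hamming84_decode(rx))      # ([1, 0, 1, 1], 'corrected')
```

In the product-code arrangement, each row and each column of a data block is protected by such a component code, and the HARQ layer requests retransmission when a component decoder reports an uncorrectable pattern.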
Design and implementation of log domain decoder IJECEIAES
Low-Density Parity-Check (LDPC) codes have become famous in communications systems for error correction, owing to their robust error-correcting performance and their ability to meet the requirements of the 5G system. However, the main challenge facing researchers is the hardware implementation, because of its high complexity and long run-time. In this paper, an efficient and optimal design for a log-domain decoder has been implemented using Xilinx System Generator with the FPGA device Kintex7 (XC7K325T-2FFG900C). Results confirm that the proposed decoder gives a Bit Error Rate (BER) very close to theoretical calculations, which illustrates that this decoder is suitable for next-generation demands for a high data rate with very low BER.
Capsulization of Existing Space Time TechniquesIJEEE
1) The document discusses space-time coding techniques used in wireless communication systems to improve reliability of data transmission using multiple transmit antennas.
2) It describes space-time block codes (STBC) such as Alamouti codes and orthogonal designs which transmit redundant copies of data across antennas without loss of data rate.
3) It also discusses space-time trellis codes (STTC) which provide coding gain but have higher complexity than STBCs.
A NEW HYBRID DIVERSITY COMBINING SCHEME FOR MOBILE RADIO COMMUNICATION SYSTEM...ijcsit
Diversity combining is a technique in wireless networks that uses multiple-antenna systems to improve the quality of the radio signal. Mobile radio systems suffer from multipath propagation due to signal obstruction in the channel. A new hybridized diversity combining scheme consisting of Equal Gain Combining (EGC) and Maximal Ratio Combining (MRC) is proposed in this paper. The performance of the hybrid model was evaluated using Outage Probability (Pout) and Processing time (Pt) at different Signal-to-Noise Ratios (SNR) and numbers of signal paths (L = 2, 3) for 4-QAM and 8-QAM modulation schemes. A mathematical expression for the hybrid EGC-MRC was derived using the Probability Density Function (PDF) of the Nakagami fading channel. MATLAB R2015b software was used for the model simulation. The results show that hybrid EGC-MRC outperforms the standalone EGC and MRC schemes, having lower Pout and Pt values. Hence, hybrid EGC-MRC exhibits enhanced potential to mitigate multipath propagation at reduced system complexity.
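The two combiners being hybridized differ only in their branch weights, which is easy to show in code: MRC weights each branch by its conjugate channel gain, while EGC only co-phases the branches. A noiseless two-path sketch (channel gains chosen arbitrarily for illustration):

```python
import cmath

def mrc(received, h):
    """Maximal ratio combining: weight branch l by conj(h_l),
    normalize by total channel power."""
    num = sum(hl.conjugate() * rl for hl, rl in zip(h, received))
    return num / sum(abs(hl) ** 2 for hl in h)

def egc(received, h):
    """Equal gain combining: co-phase each branch with unit-magnitude
    weights, then average; output is scaled by the mean |h_l|."""
    num = sum((hl.conjugate() / abs(hl)) * rl for hl, rl in zip(h, received))
    return num / len(received)

s = 1 + 0j                                           # transmitted symbol
h = [0.9 * cmath.exp(0.4j), 0.5 * cmath.exp(-1.1j)]  # L = 2 path gains
r = [hl * s for hl in h]                             # noiseless branches
print(abs(mrc(r, h) - s) < 1e-9)        # True: MRC recovers s exactly
print(abs(egc(r, h) - 0.7 * s) < 1e-9)  # True: EGC scales s by mean |h|
```

EGC avoids estimating the branch amplitudes (only the phases), which is exactly the complexity saving a hybrid EGC-MRC scheme trades against MRC's optimal SNR.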
New low-density-parity-check decoding approach based on the hard and soft dec...IJECEIAES
It is established that hard decision algorithms are more appropriate than soft decision ones for low-density parity-check (LDPC) decoding, since they are less complex at the decoding level. On the other hand, the soft decision algorithm notably outperforms the hard decision one in terms of the bit error rate (BER) gap. In order to minimize the BER and the gap between these two families of LDPC decoders, a new LDPC decoding algorithm is suggested in this paper, based on both the normalized min-sum (NMS) and modified weighted bit-flipping (MWBF) algorithms. The proposed algorithm is named normalized min-sum modified weighted bit-flipping (NMSMWBF). The MWBF is executed after the NMS algorithm. The simulations show that our algorithm outperforms the NMS at a BER of 10^-8 over the additive white Gaussian noise (AWGN) channel by 0.25 dB. Furthermore, the proposed NMSMWBF and the NMS are at the same level of decoding difficulty.
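The bit-flipping half of such a decoder descends from plain hard-decision bit-flipping, which is simple enough to sketch: flip the bit that participates in the most failed parity checks, then repeat. (The reliability weighting of MWBF and the NMS stage are omitted; the small parity-check matrix is purely illustrative.)

```python
H = [  # small illustrative parity-check matrix (7 bits, 3 checks)
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def bit_flip_decode(word, max_iters=10):
    """Plain bit-flipping: flip the bit in the most failed checks."""
    w = list(word)
    n = len(w)
    for _ in range(max_iters):
        syndrome = [sum(row[j] * w[j] for j in range(n)) % 2 for row in H]
        if not any(syndrome):
            return w                       # all parity checks satisfied
        votes = [sum(s for row, s in zip(H, syndrome) if row[j])
                 for j in range(n)]
        w[votes.index(max(votes))] ^= 1    # flip the worst offender
    return w

code = [1, 1, 1, 0, 0, 0, 0]    # a valid codeword: H @ code = 0 (mod 2)
rx = list(code)
rx[3] ^= 1                       # one channel error
print(bit_flip_decode(rx) == code)   # True
```

MWBF refines the `votes` step with channel reliability weights, and running it after NMS, as proposed, lets the cheap flipping stage clean up residual errors the soft stage leaves behind.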
GF(q) LDPC encoder and decoder FPGA implementation using group shuffled beli...IJECEIAES
This paper presents field programmable gate array (FPGA) implementations of a GF(q) low-density parity-check (LDPC) encoder and decoder using the group shuffled belief propagation (GSBP) algorithm. For small blocks, non-binary LDPC codes have been shown to have a greater error-correction rate than binary codes. The decoding behavior of non-binary LDPC codes over GF(16) (also known as GF(q)-LDPC codes) over the additive white Gaussian noise (AWGN) channel has been demonstrated to be close to the Shannon limit while employing a short block length (N = 600 bits). A non-binary LDPC (NB-LDPC) code construction program is also provided. Furthermore, a simplified bubble-check check-node operation is implemented using first-in first-out (FIFO) buffers, based on an elegant design. The decoder architecture was described in very high speed integrated circuit (VHSIC) hardware description language (VHDL) and simulated in ModelSim 6.5, and the synthesized output on a Cyclone II FPGA is compared with the simulation output.
Iterative network channel decoding with cooperative space-time transmissionijasuc
This document summarizes an iterative network-channel decoding scheme for cooperative space-time transmission with network coding. The scheme uses convolutional codes as network codes at the relay node and Reed-Solomon codes as channel codes at the user nodes. An iterative joint network-channel decoder exchanges soft information between convolutional code-based network decoder and Reed-Solomon code-based channel decoders. Extrinsic information transfer analysis is performed to investigate the convergence properties of the proposed iterative decoder.
Similar to Improving The Performance of Viterbi Decoder using Window System (20)
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to precisely delineate tumor boundaries from magnetic resonance imaging (MRI) scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The model is rigorously trained and evaluated, exhibiting remarkable performance metrics, including an impressive global accuracy of 99.286%, a high class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of our proposed model. These findings underscore the model's competence in precise brain tumor localization, underscoring its potential to revolutionize medical image analysis and enhance healthcare outcomes. This research paves the way for future exploration and optimization of advanced CNN models in medical imaging, emphasizing addressing false positives and resource efficiency.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Int J Elec & Comp Eng, Vol. 8, No. 1, February 2018: 611–621. ISSN: 2088-8708
results as seen in the AWGN channel with 2 and 3 memories. In the third part, we propose a function called RSCPOLY2TRELLIS for recursive systematic convolutional (RSC) encoders, which creates the trellis structure of an RSC encoder from the matrix “H”, and we present a simulated comparison of RSC TCM 8PSK decoded with Viterbi soft, MAP, Log-MAP, and Max-Log-MAP. Finally, the last section presents the conclusion of this paper.
2. RELATED WORKS
In this section, we present some related works which use the decoding algorithms Viterbi, MAP, and Log-MAP, as well as the TCM encoder with Ungerboeck mapping and Gray code mapping (UGM).
In [3], Manish Kumar et al. compared the latency of RSC-RSC serial concatenated codes using non-iterative concatenated Viterbi decoding with that of RS-RSC serial concatenated codes using a concatenation of Viterbi and Berlekamp-Massey decoding. The simulation results showed that the latency decreases as the code rate increases, and that RSC-RSC is the better code since it has lower latency than RS-RSC. Hence the RSC-RSC system is more suitable for low-latency applications.
In [4], Ilesanmi Banjo Oluwafemi improved the performance of two hybrid concatenated super-orthogonal space-time trellis code (SOSTTC) topologies over flat fading channels. The encoding operation is based on the concatenation of convolutional codes, interleaving, and super-orthogonal space-time trellis codes, and the two schemes are decoded with an iterative decoding process where a symbol-by-symbol maximum a posteriori (MAP) decoder is used for the inner SOSTTC decoder and a bit-by-bit MAP decoder is used for the outer convolutional decoder.
In [5], the work of Sameer A. Dawood et al. showed the effectiveness of turbo codes in a new approach for an OFDM system based on a discrete multiwavelet critical-sampling transform (OFDM-DMWCST). The use of turbo coding in an OFDM-DMWCST system helps provide the desired performance at higher data rates. Two types of turbo codes were used in this work, i.e., parallel concatenated convolutional codes (PCCCs) and serial concatenated convolutional codes (SCCCs). In both types, the decoding is performed by the iterative decoding algorithm based on the log-MAP (maximum a posteriori) algorithm.
In [6], Bassou and Djebbari introduced a new type of mapping, called Ungerboeck-Gray trellis coded modulation (TCM-UGM), for spectral efficiencies greater than or equal to 3 bit/s/Hz. This TCM-UGM code outperforms the Ungerboeck TCM code by 0.26 dB over the Gaussian channel and by 2.59 dB over the Rayleigh fading channel at BER = 10^-5. This technique is combined with our approach to obtain more efficiency.
In [7], Trio et al. proposed a VLSI architecture implementing a reversed-trellis tail-biting convolutional code (RT-TBCC) decoding algorithm. The algorithm is designed by modifying the direct-terminating maximum-likelihood (ML) decoding process to achieve a better correction rate, which reduces the computational time and resources compared to the existing solution.
3. TCM: TRELLIS CODED MODULATION
According to Ungerboeck (1982), whatever the spectral efficiency considered for the transmission, and however complex the code, the asymptotic coding gain given by a TCM is almost maximal using a single binary element of redundancy per transmitted symbol. Thus, for a TCM constructed from a constellation with M = 2^(n+1) points, the spectral efficiency of the transmission is n bit/s/Hz and the performance of the TCM is compared with that of a modulation with 2^n points, that is to say, having a 2^n-point constellation. The constellation of a TCM therefore has twice as many points as that of the uncoded modulation having the same spectral efficiency.
Suppose, therefore, that we want to transmit a block of n binary elements coming from the information source. It is divided into two blocks of respective lengths ñ and (n − ñ). The block of ñ bits is then coded with a convolutional encoder of rate Rc = ñ/(ñ + 1) with v memories (2^v states); the second block is left unchanged. The (ñ + 1) bits from the encoder are then used to select one of the 2^(ñ+1) sub-constellations of 2^(n−ñ) points, while the (n − ñ) uncoded bits are used to select a particular point in this sub-constellation.
Figure 1 shows the synoptic diagram of an encoder for Ungerboeck TCM.
Improving the Performance of Viterbi Decoder using Window System (Rekkal Kahina)
Figure 1. Synoptic diagram of an encoder for Ungerboeck TCM
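The bit-split of the Ungerboeck encoder can be sketched in code. The following Python snippet is an illustrative sketch, assuming n = 2, ñ = 1 and a 4-state rate-1/2 feedforward convolutional encoder with generators (7, 5) in octal; the generator choice and bit-to-symbol packing are our assumptions for illustration, not taken from the paper.

```python
# Illustrative Ungerboeck-style TCM encoder for 8PSK (n = 2, ñ = 1).
# The rate ñ/(ñ+1) = 1/2 convolutional encoder (generators (7, 5) octal,
# chosen for illustration) maps the coded bit to 2 subset-select bits;
# the remaining n - ñ = 1 uncoded bit picks the point inside the subset.

def conv_step(state, bit):
    """One step of a 4-state (v = 2) rate-1/2 feedforward encoder."""
    s1, s0 = state
    out = (bit ^ s1 ^ s0, bit ^ s0)      # g0 = 111, g1 = 101 (octal 7, 5)
    return (bit, s1), out

def tcm_encode(bits):
    """Map pairs (coded bit, uncoded bit) to 8PSK symbol indices 0..7."""
    assert len(bits) % 2 == 0
    state, symbols = (0, 0), []
    for k in range(0, len(bits), 2):
        coded_in, uncoded = bits[k], bits[k + 1]
        state, (c1, c0) = conv_step(state, coded_in)
        # (c1, c0) select one of 2^(ñ+1) = 4 sub-constellations; the
        # uncoded bit selects one of its 2^(n-ñ) = 2 points.
        symbols.append((uncoded << 2) | (c1 << 1) | c0)
    return symbols

print(tcm_encode([1, 0, 0, 1, 1, 1]))    # three 8PSK symbol indices
```

Each input pair thus produces one 8PSK symbol, giving the spectral efficiency of 2 bit/s/Hz expected for n = 2.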
3.1. Rules for Building the Trellis
The implementation of the decoder requires the construction of a trellis of the TCM. To build such a
trellis, some rules must be followed if one wishes to maximize the free distance. For this, Ungerboeck
proposes the following three rules:
The M = 2^(n+1) signals of the initial (unpartitioned) constellation must be used with the same frequency. Figure 2 shows the set partitioning method applied to the 8PSK [2]
Figure 2. Set partitioning method applied to the 8PSK [2]
The 2^(n−ñ) parallel branches, if they exist, must be associated with signals belonging to the same 2^(n−ñ)-point sub-constellation.
The 2^n branches that leave a state or reach a state must be associated with signals belonging to the same 2^n-point sub-constellation.
The first rule provides the trellis with a regular pattern. Rules 2 and 3 ensure that the free distance of
the TCM is always greater than the minimum Euclidean distance of the uncoded modulation taken as
reference for the coding gain calculation.
Thus the asymptotic coding gain is:

Ga = 10 log10 ( d²free / d²min ) (1)

where dfree is the free distance of the TCM and dmin is the minimum Euclidean distance of the uncoded reference modulation.
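The distances produced by set partitioning, and the resulting asymptotic gain, can be checked numerically. The sketch below is our own illustration (not from the paper): it computes the intra-subset minimum Euclidean distances of a unit-energy 8PSK under set partitioning, then evaluates the coding-gain formula (1) for the classic 4-state 8PSK TCM, whose free distance is limited by the parallel branches (dfree = 2) and whose uncoded reference is QPSK (dmin = √2).

```python
import math

# Intra-subset minimum distances of unit-energy 8PSK under set partitioning,
# and the asymptotic coding gain of eq. (1) for the classic 4-state 8PSK TCM
# (d_free = 2.0 from the parallel branches) versus uncoded QPSK (d_min = √2).

def min_dist(indices):
    """Minimum pairwise Euclidean distance among the given 8PSK points."""
    pts = [complex(math.cos(2 * math.pi * i / 8), math.sin(2 * math.pi * i / 8))
           for i in indices]
    return min(abs(a - b) for i, a in enumerate(pts) for b in pts[i + 1:])

d0 = min_dist(range(8))          # full 8PSK: 2 sin(pi/8)
d1 = min_dist([0, 2, 4, 6])      # one partition level (QPSK subset): sqrt(2)
d2 = min_dist([0, 4])            # two levels (antipodal pair): 2.0

ga = 10 * math.log10(d2 ** 2 / 2.0)   # eq. (1): 10 log10(d_free^2 / d_min^2)
print(round(d0, 3), round(d1, 3), round(d2, 3), round(ga, 2))
# → 0.765 1.414 2.0 3.01
```

Each partition level enlarges the intra-subset distance, and the final gain of about 3 dB matches Ungerboeck's well-known result for this configuration.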
4. ALGORITHMS FOR DECODING THE TCM ENCODER
The most common decoding is based on the Viterbi algorithm [8]. It consists in finding, in the trellis, the path which corresponds to the most probable sequence, that is to say, the one at minimum distance from the received sequence. The following section illustrates the Viterbi and MAP algorithms.
4.1. Algorithm of Viterbi
The aim of maximum-likelihood decoding is to look in the trellis code “C” for the nearest (most likely) path to the received sequence (i.e., the observation). The distance employed in the algorithm is either the Euclidean distance, in the case of soft inputs, or the Hamming distance, in the case of hard inputs.
Thus the decoding problem is: given the received symbols {r_i} per trellis interval, determine the most likely transmitted path through the trellis. If we assume that the uses of the BSC (the binary symmetric channel) are independent (i.e., we have random errors), the problem reduces to minimizing the Hamming distance between the {r_i} and our estimate of the {r_i}, denoted as {r̂_i}:

d(r, r̂) = Σ_i Σ_j r_i^(j) ⊕ r̂_i^(j) (2)
Figure 3. Transition diagram for Viterbi algorithm
A list of all the transitions per state and their output values r̂_i is given in Figure 3. For each trellis transition we do the following. Let M_{i−1}(s) be the accumulated state metric, that is, the sum in (2) up to trellis interval i − 1; the input for the discrete-time transition i is r_i. For each state s:
a) Compute, over the transitions (s′ → s) entering state s,

M_i(s) = min_{s′} [ M_{i−1}(s′) + Σ_j r_i^(j) ⊕ r̂_i^(j) ] (3)

b) Call the transition achieving the minimum the winning transition and store it, as well as M_i(s), the state metric for the winning transition.
When these steps are implemented, we are left with one single transition path per state, per trellis interval. The collections of these winning transition paths over time are called survivor paths. The decoding path is then the minimum over all survivor paths. In theory, one should wait until the survivor paths merge, that is, until their initial segments coincide. In practice, one stores the results for W trellis intervals and then chooses the best survivor path, that is, the path with the smallest M(·).
The Viterbi algorithm therefore requires the computation of 2^(kL) metrics at each step, hence a complexity of W × 2^(kL), linear in W. However, the complexity remains exponential in k and L, which limits its use to codes of small size (kL of 7 to 10 at most). The width W of the decoding window is taken in practice to be about 5L. This guarantees (empirically) that the survivors converge to a single path inside the decoding window. The Viterbi algorithm therefore requires the storage of the cumulative metrics and of survivors of length 5kL bits [9].
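As a concrete illustration of steps (a) and (b), the following Python sketch decodes a hard-decision bit stream for a rate-1/2, constraint-length-3 convolutional code. This is a minimal sketch, not the paper's implementation; the octal generators 7 and 5 are an assumption chosen for illustration.

```python
# A minimal hard-decision Viterbi sketch: rate 1/2, constraint length K = 3.
# The generators (7, 5 in octal) are hypothetical, chosen for illustration.
def viterbi_hard(received_bits, K=3, g=(0b111, 0b101)):
    """Decode a hard-decision bit stream; returns the most likely input bits."""
    n_states = 1 << (K - 1)

    def outputs(state, bit):
        # expected branch label for this (state, input) pair
        reg = (bit << (K - 1)) | state
        return [bin(reg & gi).count("1") & 1 for gi in g]

    def next_state(state, bit):
        return ((bit << (K - 1)) | state) >> 1

    INF = float("inf")
    metrics = [0] + [INF] * (n_states - 1)      # trellis starts in state 0
    paths = [[] for _ in range(n_states)]
    # process the received stream two bits (one branch label) at a time
    for i in range(0, len(received_bits), 2):
        r = received_bits[i:i + 2]
        new_metrics = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metrics[s] == INF:
                continue
            for bit in (0, 1):
                ns = next_state(s, bit)
                # branch metric = Hamming distance to the expected label, as in (2)
                bm = sum(a != b for a, b in zip(r, outputs(s, bit)))
                if metrics[s] + bm < new_metrics[ns]:
                    new_metrics[ns] = metrics[s] + bm
                    new_paths[ns] = paths[s] + [bit]    # survivor path
        metrics, paths = new_metrics, new_paths
    best = min(range(n_states), key=lambda s: metrics[s])
    return paths[best]
```

With a clean (error-free) received stream, the survivor with metric zero is the transmitted path; with a few flipped bits, the minimum-metric survivor still recovers the input as long as the errors stay within the code's correction capability.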
4.2. The MAP Algorithm
This algorithm is based on the calculation of the probability of occurrence of a bit (1 or 0) in a certain position. We have at our disposal a string of length T, which comes from the coding of an information word. The method consists in calculating iteratively the a posteriori probability of each bit, first as a function of the probabilities of the bits preceding it, and then as a function of the bits following it. For this reason, the algorithm is called the "forward-backward algorithm". We place equal importance on the "before" bits and the "after" bits.
Here, Y is the string of bits received and t is the position of the bit in the string. Similarly, we denote by Ti the set of transitions from state l′ to state l that occur when the bit "i" is at the input. Let M be the number of possible states.
We try to calculate the log-likelihood ratio λ(u(t)):
λ(u(t)) = ln [ P(u(t) = 1 | Y) / P(u(t) = 0 | Y) ] (4)
Improving The Performance Of Viterbi Decoder using Window System (Rekkal Kahina)
where u(t) denotes the output of the encoder.
For two given states, one defines a joint probability:
σ_t^i(l′, l) = P(s_{t−1} = l′, s_t = l, Y) = P(u(t) = i, s_t = l, Y) (5)
Here i is the bit that sends state l′ to state l (σ is 0 when there is no transition from l′ to l).
We thus have the following relation:
P(u(t) = i, Y) = Σ_{(l′,l)} σ_t^i(l′, l) (6)
To calculate σ, we must introduce the joint probability density:
α_t^i(l) = P(u(t) = i, s_t = l, Y(1:t)) (7)
where we have denoted by Y(1:n) the elements from 1 to n of the vector Y. Similarly, we define the conditional probability:
β_t(l) = P(Y(t+1:T) | s_t = l) (8)
Using the Bayes rule, we obtain the relation:
σ_t^i(l′, l) = P(u(t) = i, s_t = l, Y(1:t)) · P(Y(t+1:T) | u(t) = i, s_t = l, Y(1:t)) (9)
But, since the observations received after time t do not depend on the sequence received up to this moment given the current state, the expression becomes:
σ_t^i(l′, l) = P(u(t) = i, s_t = l, Y(1:t)) · P(Y(t+1:T) | s_t = l) (10)
As the term P(Y) can be factored out, we get:
σ_t^i(l′, l) = α_t^i(l) · β_t(l), for bit i. (11)
Computation of α and β
We try to calculate α recursively. For this, we write:
α_t^i(l) = P(u(t) = i, s_t = l, Y(1:t−1), Y(t)) (12)
Then, summing over all the possible transitions from time t−1:
α_t^i(l) = Σ_{l′} Σ_{j∈{0,1}} P(u(t) = i, s_t = l, u(t−1) = j, s_{t−1} = l′, Y(1:t−1), Y(t)), for bit i. (13)
Applying the Bayes rule, we find:
α_t^i(l) = Σ_{l′} Σ_{j∈{0,1}} α_{t−1}^j(l′) · γ_t^i(l′, l) (14)
where:
γ_t^i(l′, l) = P(u(t) = i, s_t = l, Y(t) | s_{t−1} = l′) (15)
We therefore obtain:
σ_t^i(l′, l) = β_t(l) · Σ_{j∈{0,1}} α_{t−1}^j(l′) · γ_t^i(l′, l) (16)
By similar calculations, we obtain:
β_t(l′) = Σ_l Σ_{j∈{0,1}} β_{t+1}(l) · γ_{t+1}^j(l′, l) (17)
As γ characterizes the noise, which is white and Gaussian, one can write:
γ_t^i(l′, l) = P(u(t) = i) · exp(−‖Y(t) − x(t)‖² / (2σ²)) if (l′, l) ∈ Ti, and 0 otherwise. (18)
Here x(t) is the value that should have been observed at the output of the encoder when the state is changed from l′ to l.
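The forward-backward recursions above can be sketched as follows. This is a schematic Python illustration over a generic trellis, not the paper's code: the branch probabilities γ of (18) are assumed to be precomputed and supplied in a table, and the trellis is assumed to start in state 0 with a free final state.

```python
import math

# Schematic forward-backward (BCJR) pass over a generic trellis.
# gamma[t][(l_prev, l, i)] holds the branch probability for input bit i;
# transitions lists the (l_prev, l, i) branches valid at every step.
def bcjr_llr(n_states, n_steps, gamma, transitions):
    """Return the a posteriori LLR of the input bit u(t) for each step t."""
    # forward recursion (alpha): accumulate branch probabilities into each state
    alpha = [[0.0] * n_states for _ in range(n_steps + 1)]
    alpha[0][0] = 1.0                       # trellis assumed to start in state 0
    for t in range(n_steps):
        for (lp, l, i) in transitions:
            alpha[t + 1][l] += alpha[t][lp] * gamma[t][(lp, l, i)]
    # backward recursion (beta)
    beta = [[0.0] * n_states for _ in range(n_steps + 1)]
    beta[n_steps] = [1.0] * n_states        # no constraint on the final state
    for t in range(n_steps - 1, -1, -1):
        for (lp, l, i) in transitions:
            beta[t][lp] += beta[t + 1][l] * gamma[t][(lp, l, i)]
    # combine: sigma = alpha * gamma * beta, summed per input bit, then the LLR
    llrs = []
    for t in range(n_steps):
        p = [0.0, 0.0]
        for (lp, l, i) in transitions:
            p[i] += alpha[t][lp] * gamma[t][(lp, l, i)] * beta[t + 1][l]
        llrs.append(math.log(p[1] / p[0]))
    return llrs
```

On a degenerate one-state trellis where the two branches at a step carry probabilities p0 and p1, the returned LLR reduces to ln(p1/p0), as expected from (4).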
4.3. Simplified versions of the MAP algorithm
The BCJR algorithm, or MAP, suffers from one important disadvantage: it must carry out many multiplications. In order to reduce this computational complexity, several simplified versions were introduced, such as SOVA in 1989 [10], the max-log-MAP algorithm in 1990-1994 [11], [12], and the log-MAP algorithm in 1995 [13]. The multiplication operations are replaced by additions, and three new variables A, B and Γ are defined, as follows:
Γ_k(s′, s) = ln γ_k(s′, s) = ln C_k − ‖Y(k) − x(k)‖² / (2σ²)
A_k(s) = ln α_k(s) = max*_{s′} [ A_{k−1}(s′) + Γ_k(s′, s) ]
B_k(s′) = ln β_k(s′) = max*_{s} [ B_{k+1}(s) + Γ_{k+1}(s′, s) ]
where max*(a, b) = max(a, b) + ln(1 + e^(−|a−b|)) in the log-MAP algorithm, and max*(a, b) ≈ max(a, b) in the max-log-MAP algorithm. (19)
For the convolutional encoder of rate 2/3, we use the symbol-by-symbol MAP algorithm for non-binary trellises. Roughly speaking, we can state that the complexity of the BCJR algorithm is about three times that of the Viterbi algorithm.
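A minimal sketch of the log-domain operation behind these variants: max-log-MAP replaces the sum of exponentials with a maximum, while log-MAP restores exactness with the Jacobian-logarithm correction term.

```python
import math

# The log-domain simplification behind eq. (19): products of probabilities
# become sums of log-metrics, and ln(e^a + e^b) is either approximated
# (max-log-MAP) or computed exactly via the Jacobian logarithm (log-MAP).
def max_log(a, b):
    """max-log-MAP: ln(e^a + e^b) ~= max(a, b)."""
    return max(a, b)

def max_star(a, b):
    """log-MAP: ln(e^a + e^b) = max(a, b) + ln(1 + e^-|a-b|), exact."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))
```

The correction term ln(1 + e^-|a-b|) lies in (0, ln 2], which is why max-log-MAP loses only a small fraction of a dB compared to log-MAP while avoiding the transcendental evaluation (in hardware it is usually stored in a small lookup table).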
5. THE PROPOSED APPROACH
5.1. Viterbi Improved by the Window System
We saw in the previous sections how it is possible, using an algorithm, to correct an erroneous message. The methods of encoding and decoding discussed there require some computing power, which can cause problems (for example in embedded systems, where computing power is limited and the calculation must be done in real time), and only messages with a low error percentage can be corrected upon receipt. For this purpose, our approach proposes to use a window system in the encoding and decoding phases in order to confine the error to a single window, whose length equals the number of memory elements plus one (the constraint length).
In this section, we use the window in convolutional coding and decoding. To explain the difference between the window system and the classical approach, we present the following example:
Figure 4 shows the structure of the convolutional encoder with rate R = 1/2.
Figure 4. Diagram of a convolutional encoder of rate R = 1/2
In Table 1 we see that "Y" is a function which depends on the input X and the values of the states S0 and S1: Yi = F(Xi, S0, S1).
Initialization: S0 = 0; S1 = 0.
Figure 5 shows the transmission chain of the convolutional encoder.
Figure 5. Transmission chain of the convolutional encoder
Table 1. Scenarios of coding
Xi :  x0  x1  x2  x3  x4  x5  x6  x7  x8
(F)
Xi :  x0  x1  x2  x3  x4  x5  x6  x7  x8
S0 :  0   x0  x1  x2  x3  x4  x5  x6  x7
S1 :  0   0   x0  x1  x2  x3  x4  x5  x6
Yi :  y0  y1  y2  y3  y4  y5  y6  y7  y8
According to Table 1, we can see that each "Xi" contributes to as many "Yi" as the constraint length of the code (in our case, the constraint length equals three (03)).
That is to say, the code y0 depends, for its encoding, on "X0, S0, S1"; y1 depends on "X1, X0, S1"; and the code y2 depends on "X2, X1, X0". Hence, we see that X0 appears three times, in the encoding of "Y0, Y1, and Y2".
The disadvantage of this phenomenon appears in decoding by the Viterbi algorithm. If all the bits of a "Yi" are erroneous, the algorithm decodes an erroneous "Xi", and this error spreads over the constraint length of the code; consequently, all that follows is wrong. To overcome this phenomenon, we propose a window of the size of the constraint length of the code, as indicated in Table 2.
a. The size of the constraint is 3 (it is the number of memory elements + 1). Initialization: S0 = 0; S1 = 0.
b. If all the bits of a "Yi" are erroneous, the algorithm decodes a wrong "Xi", but only within one window.
Table 2. Convolutional encoding by the window system
Figure 6 shows a transmission chain of a convolutional encoder under the constraint length.
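To make the idea concrete, the following Python sketch encodes the stream in independent windows. This is our reading of the window system, not the paper's code: `encode_fn` is a hypothetical per-window encoder that restarts from the all-zero state, so a burst of channel errors can corrupt at most one window after decoding.

```python
# Sketch of the proposed window system (an interpretation of Section 5.1):
# the stream is split into windows of constraint-length size, and the encoder
# state is reset between windows so that an error cannot propagate past one
# window boundary.
def encode_windowed(bits, window, encode_fn):
    """encode_fn encodes one window, starting from the all-zero state."""
    out = []
    for start in range(0, len(bits), window):
        out += encode_fn(bits[start:start + window])  # state reset per window
    return out
```

A matching decoder runs the Viterbi algorithm independently on each window, so a fully corrupted "Yi" yields at most one wrong window of decoded bits instead of an error that spreads over the constraint length and beyond.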
Figure 6. A transmission chain of a convolutional encoder under the constraint length
5.2. RSCpoly2trellis for a Recursive Systematic Convolutional (RSC) Encoder
The proposed function has the syntax: trellis = RSCpoly2trellis(H);
The RSCpoly2trellis function accepts a polynomial description of a recursive systematic convolutional (RSC) encoder and returns the corresponding trellis structure description. The output of RSCpoly2trellis is suitable as an input to the convenc and vitdec functions, and as a mask parameter for the Convolutional Encoder and Viterbi Decoder blocks in the Communications Blockset.
Figure 7 shows the recursive systematic convolutional encoder which is represented by the matrix H:
[ ]
Figure 7. Recursive systematic convolutional encoder
We describe the function which creates the trellis structure of a recursive systematic convolutional encoder from the matrix H. We have the following parameters:
a. Number of input symbols (numInputSymbols), equal to 2^(number of rows of H − 1)
b. Number of output symbols (numOutputSymbols), equal to 2^(number of rows of H)
c. Number of states (numStates), equal to 2^(number of columns of H − 1)
d. Matrix of next states (nextStates), of dimension numStates × numInputSymbols
e. Matrix of output symbols (outputs), of dimension numStates × numInputSymbols
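A sketch of how such a function can build the trellis fields for a rate-1/2 RSC encoder. Interpreting the code description as a pair of feedback/feedforward polynomials is an assumption made for illustration (the polynomials 7 and 5 in octal below are hypothetical, not the paper's matrix H), and only the field names mirror the MATLAB trellis structure.

```python
# Sketch of an RSCpoly2trellis-style builder for a rate-1/2 RSC encoder.
# feedback / feedforward are binary generator polynomials (an assumption for
# illustration). Returns MATLAB-style trellis fields, with nextStates and
# outputs indexed by [state][input].
def rsc_poly2trellis(K, feedback, feedforward):
    n_states = 1 << (K - 1)
    mask = feedback & (n_states - 1)        # feedback taps on the state bits
    next_states = [[0, 0] for _ in range(n_states)]
    outputs = [[0, 0] for _ in range(n_states)]
    for state in range(n_states):
        for u in (0, 1):
            # recursive bit: input XOR the feedback taps of the current state
            fb = u ^ (bin(state & mask).count("1") & 1)
            reg = (fb << (K - 1)) | state
            parity = bin(reg & feedforward).count("1") & 1
            next_states[state][u] = reg >> 1
            outputs[state][u] = (u << 1) | parity   # systematic bit + parity
    return {"numInputSymbols": 2, "numOutputSymbols": 4,
            "numStates": n_states,
            "nextStates": next_states, "outputs": outputs}
```

For K = 3, feedback 7 and feedforward 5, the structure has 4 states; from state 0 with input 1 the recursive bit is 1, so the encoder moves to state 2 and emits the systematic bit 1 with parity 1.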
6. RESULTS AND DISCUSSION
Figure 8, Figure 9 and Figure 10 show a comparison between the classical Viterbi decoder and the Viterbi decoder with window system, for TCM QPSK/8PSK and for TCM combined with a new type of mapping called the Ungerboeck gray trellis coded modulation (TCM-UGM), described in Section 2 "Related works", with rates of 1/2, 1/3 and 2/3.
a. We can observe, at high signal-to-noise ratios, that the simulation curve using the Viterbi decoder with window system outperforms the classical Viterbi by a gain equal to 1 dB at BER = 10^-6.
b. To further improve performance, the Ungerboeck gray mapping is considered with the TCM encoder. The simulation result using TCM-UGM with the Viterbi decoder with window system outperforms TCM with the Viterbi decoder with window system by a gain of approximately 2.7 dB, and outperforms TCM with the classical Viterbi decoder by a gain of approximately 3.8 dB.
Figure 8. Comparison between 4 states TCM QPSK,
TCM-Win QPSK and TCM-UGM-Win QPSK over
AWGN channel
Figure 9. Comparison between 4 states TCM 8PSK,
TCM-Win 8PSK and TCM-UGM-Win 8PSK over
AWGN channel
Figure 10. Comparison between 8 states TCM 8PSK,
TCM-Win 8PSK and TCM-UGM-Win 8PSK over
AWGN channel
Figure 11. Comparison between 4 states TCM QPSK
with Viterbi hard decoder and Viterbi soft decoder
over AWGN channel.
The simulations illustrated in Figure 11 show that 4-state TCM-QPSK with the Viterbi soft decoder outperforms 4-state TCM-QPSK with the Viterbi hard decoder, where a gain of 4 dB can be achieved by using the Viterbi soft decoder.
Figure 12 shows, at high signal-to-noise ratios, that the curve of the log-MAP decoder coincides with that of the max-log-MAP decoder; both outperform the Viterbi soft and MAP algorithms, and a gain of 2 dB at BER = 10^-4 is easily obtained with log-MAP or max-log-MAP compared to Viterbi soft. The BER performance ranks as follows: BER log-MAP = BER max-log-MAP < BER Viterbi soft < BER MAP.
Figure 12. Comparison between RSC TCM 8PSK with MAP, log-MAP, max-log-MAP and Viterbi soft decoders over AWGN channel
7. CONCLUSION
In this paper, MATLAB simulation was used to evaluate the performance of the Viterbi decoder with window system compared to the classical Viterbi. The simulation results over the AWGN channel with rates 1/2, 1/3 and 2/3 have shown that, at a BER of 10^-6, the Viterbi decoder with window system outperforms the classical Viterbi by 1 dB. Moreover, we propose the use of the Ungerboeck gray mapping to further improve performance with the TCM encoder, where a gain of 2.7 dB was achieved compared to TCM with the Viterbi decoder with window system, and a gain of 3.8 dB is observed compared to the original TCM with the classical Viterbi decoder.
From the above results, it can also be seen that, with rate 2/3, the TCM with recursive systematic convolutional (RSC) encoder and log-MAP or max-log-MAP decoders gives better results than Viterbi soft, with a gain of 2 dB at BER = 10^-4.
It is also clearly shown that Viterbi soft outperforms the MAP algorithm, which is known to be greedy in memory space and in computing time; this cost grows further as the rate and the number of memory elements increase, so the decoding operation becomes long. For this reason, simplified variants of MAP, such as log-MAP and max-log-MAP, have been introduced; they are used in the new generation of encoders such as turbo codes, in which researchers have placed great hopes because they approach the limit given by Shannon's second theorem.
REFERENCES
[1] L. Conde-Canencia, “Turbo-codes et modulation à grande efficacité spectrale,” PhD thesis, Université de Bretagne Occidentale, France, June 2004.
[2] G. Ungerboeck, “Channel coding with multilevel/phase signals,” IEEE Transactions on Information Theory, vol. IT-28, no. 1, pp. 55-67, Jan. 1982.
[3] M. Kumar and J. Saxena, “Performance Comparison of Latency for RSC-RSC and RS-RSC Concatenated Codes,” Indonesian Journal of Electrical Engineering and Informatics (IJEEI), vol. 1, no. 3, pp. 78-83, September 2013, ISSN: 2089-3272, DOI: 10.11591/ijeei.v1i3.77.
[4] I. B. Oluwafemi, “Hybrid Concatenated Coding Scheme for MIMO Systems,” International Journal of Electrical and Computer Engineering (IJECE), vol. 5, no. 3, pp. 464-476, June 2015, ISSN: 2088-8708.
[5] S. A. Dawood, F. Malek, M. S. Anuar, and H. A. Rahim, “Enhancement the Performance of OFDM based on Multiwavelets Using Turbo Codes,” TELKOMNIKA (Telecommunication Computing Electronics and Control), Vol.
13, No. 4, pp. 1225-1232, December 2015, ISSN: 1693-6930.
[6] A. Bassou and A. Djebbari, “Contribution to the Improvement of the Performance of Trellis-Coded Modulation,”
WSEAS Transactions on Communications, Vol. 6, No. 2, pp. 307-311, February 2006.
[7] T. Adiono, A. Z. Ramdani, and R. V. W. Putra, “Reversed-Trellis Tail-Biting Convolutional Code (RT-TBCC) Decoder Architecture Design for LTE,” International Journal of Electrical and Computer Engineering (IJECE), vol. 8, no. 1, February 2018.
[8] G. D. Forney, Jr., “The Viterbi Algorithm,” Proceedings of the IEEE, vol. 61, no. 3, pp. 268-278, March 1973.
[9] O. Pothier, “Codage de canal et turbo-codes, cours 1 : introduction générale au codage de canal,” lecture notes, ENST Brest, 2000.
[10] J. Hagenauer and P. Hoeher, “A Viterbi Algorithm with Soft-Decision Outputs and its Applications,” Proceedings of GLOBECOM '89, Dallas, Texas, pp. 47.1.1-47.1.7, November 1989.
[11] W. Koch and A. Baier, “Optimum and sub-optimum detection of coded data disturbed by time-varying intersymbol interference,” Proceedings of IEEE GLOBECOM, pp. 1679-1684, December 1990.
[12] J. A. Erfanian, S. Pasupathy and G. Gulak, “Reduced complexity symbol detectors with parallel structures for ISI
channels,” IEEE Trans. Communications, vol. 42, pp. 1661–1671, 1994.
[13] P. Robertson, E. Villebrun and P. Hoeher, “A comparison of optimal and sub-optimal MAP decoding algorithms
operating in the log domain”, Proc. Intern. Conf. Communications (ICC), pp. 1009–1013, June 1995.