This paper explores the upcoming generation of cellular communication, the 5th Generation (5G), with particular emphasis on motivating the use of cooperative methods within Low Latency Adversarial Transmission Control. The purpose of the paper is two-fold: to approach and describe 5G from diverse perspectives, and to identify opportunities for communication research. Multiple challenges related to transmission control are discussed, and several solutions based on cooperative methods are considered. 5G is typically perceived as a convergence platform where diverse networks coexist. Recognizing that optimization problems in network management are induced by the rules embedded in protocol design, the paper argues that, instead of enhancing current procedures, protocols should be designed with optimization in mind from the ground up.

The dynamics of a chaotic spiking neuron model are studied mathematically and experimentally. The Nonlinear Dynamic State (NDS) neuron is analysed to further understand the model and improve it. Chaos has many interesting properties, such as sensitivity to initial conditions, space filling, control, and synchronization. As suggested by biologists, these properties may be exploited and play a vital role in carrying out computational tasks in the human brain. The NDS model has some limitations; in this paper the model is investigated to overcome some of these limitations in order to enhance it. To this end, the model's parameters are tuned and the resulting dynamics are studied, and the discretization method of the model is reconsidered. Moreover, a mathematical analysis is carried out to reveal the underlying dynamics of the model after its parameters are tuned. The results of these methods reveal some facts regarding the NDS attractor and suggest the stabilization of a large number of unstable periodic orbits (UPOs), which might correspond to memories in phase space.
...

Further analysis and experimentation on a chaotic dynamic model, viz. the Nonlinear Dynamic State (NDS) neuron, are carried out in this paper. The analysis and experiments are performed to further understand the underlying dynamics of the model and to enhance it. Chaos provides many interesting properties that can be exploited to achieve computational tasks, such as sensitivity to initial conditions, space filling, control, and synchronization. Chaos might play an important role in information-processing tasks in the human brain, as suggested by biologists. If artificial neural networks (ANNs) are equipped with chaos, the dynamic behaviours of such networks are enriched. The NDS model has some limitations that can be overcome in different ways. In this paper, different approaches are followed to push the boundaries of the NDS model in order to enhance it. One approach is to study the effects of scaling the parameters of the chaotic equations of the NDS model and to study the resulting dynamics. Another is to examine the method used to discretize the original Rössler system on which the NDS model is based. These approaches reveal some facts about the NDS attractor and suggest why such a model can be stabilized onto a large number of unstable periodic orbits (UPOs), which might correspond to memories in phase space.
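Since the NDS model is obtained by discretizing the Rössler system, the role of the discretization method can be illustrated with a minimal sketch. The forward-Euler scheme and the parameter values (a = 0.2, b = 0.2, c = 5.7) below are standard textbook choices, not necessarily those used in the NDS derivation:

```python
# Forward-Euler discretization of the continuous Rössler system:
#   dx/dt = -y - z,  dy/dt = x + a*y,  dz/dt = b + z*(x - c)
def rossler_euler(x, y, z, dt=0.01, a=0.2, b=0.2, c=5.7):
    """One Euler step; a smaller dt approximates the flow more closely."""
    return (x + dt * (-y - z),
            y + dt * (x + a * y),
            z + dt * (b + z * (x - c)))

def trajectory(steps, state=(0.1, 0.0, 0.0), dt=0.01):
    """Iterate the discrete map and collect the discrete-time orbit."""
    orbit = [state]
    for _ in range(steps):
        state = rossler_euler(*state, dt=dt)
        orbit.append(state)
    return orbit
```

Scaling dt, or the parameters a, b, c, changes the resulting discrete-time dynamics, which is precisely the kind of parameter tuning investigated for the NDS model.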
...

Error-correcting codes, also known as error-controlling codes, are sets of codes with redundancy that allows detecting channel errors. This is quite useful when transmitting data over a noisy channel or when retrieving data from storage with possible physical defects. The idea is to use a set of code words that are maximally distant from each other, hence reducing the chance that noise changes one code word into another valid one. The problem can be viewed as picking v codes out of the u = 2^k available codes of k bits each, such that the aggregate Hamming distance is maximized. Allocating such sets of codes is an optimization problem, which can be described in terms of several components: an objective function f, a vector of variables X = {x1, x2, . . . , xn}, and a vector of constraints C = {c1, c2, . . . , cm} which limit the values assigned to X, where n and m correspond to the problem dimensions and the total number of constraints, respectively. Then the solution s is the set of values assigned to X confined by C, and the solution space S is the set of all possible solutions. The goal is to find the minimum solution s' ∈ S where f(s') ≤ f(s) for all s ∈ S. Due to the large solution spaces of such problems, greedy algorithms are sometimes used to generate quick-and-dirty solutions. However, evolutionary search algorithms, such as genetic algorithms, simulated annealing, particle swarms, and others, represent...
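As a concrete sketch of the formulation above (the function names and the greedy strategy are illustrative, not taken from the text), the following picks v code words out of the u = 2^k candidates by greedily maximizing the aggregate Hamming distance:

```python
def hamming(a, b):
    """Number of bit positions in which words a and b differ."""
    return bin(a ^ b).count("1")

def aggregate_distance(codes):
    """Sum of pairwise Hamming distances, the objective to maximize."""
    return sum(hamming(a, b)
               for i, a in enumerate(codes)
               for b in codes[i + 1:])

def greedy_pick(k, v):
    """Greedily choose v of the 2**k words, each time adding the word
    that maximizes its total distance to the words already chosen."""
    chosen = [0]
    candidates = set(range(2 ** k)) - {0}
    while len(chosen) < v:
        best = max(sorted(candidates),
                   key=lambda c: sum(hamming(c, w) for w in chosen))
        chosen.append(best)
        candidates.remove(best)
    return chosen
```

This greedy pass is exactly the kind of quick-and-dirty baseline mentioned above; evolutionary methods aim to escape the local optima it can get stuck in.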

Printed Circuit Board (PCB) fabrication throughput depends strongly on the time of the hole-drilling stages, which is directly related to the number of holes and the order in which the drill bit moves over them. A typical PCB may have tens to hundreds of holes, pin pads, and vias, and optimizing the time to complete the drilling can significantly affect the production rate. Moreover, the holes may be of different sizes, and to drill two holes of different diameters consecutively, the head of the machine has to move to a tool box and change the drilling equipment. This is quite time consuming, so it is better to partition the holes by diameter: drill all holes of the same diameter, change the drill bit, then drill the holes of the next diameter, and so on. In this case, the drilling problem can be viewed as a series of TSPs, one for each hole diameter, where the aim is to minimize the total travel time of the machine head. The Travelling Salesman Problem (TSP) is a well-known NP-hard optimization problem that exemplifies many real-life and engineering problems, such as scheduling, and the PCB drilling optimization is one of them. Finding an optimal solution to the TSP may be prohibitively expensive, as the number of possibilities to evaluate in an exact (brute-force) search is (n-1)!/2 for n holes. Many algorithms exist to solve the TSP in an engineering sense, i.e. to a semi-optimal solution, ...
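A common engineering shortcut for such per-diameter tours (a sketch only, not the method the text goes on to discuss) is the nearest-neighbour heuristic: from the current hole, always drill the closest unvisited hole next:

```python
import math

def nearest_neighbour_tour(holes, start=0):
    """Greedy TSP tour over hole coordinates; O(n^2), no optimality guarantee."""
    unvisited = set(range(len(holes))) - {start}
    tour = [start]
    while unvisited:
        last = holes[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, holes[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(holes, tour):
    """Total head travel, including the return to the starting hole."""
    return sum(math.dist(holes[tour[i]], holes[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))
```

One such tour per drill-bit diameter, plus the tool-change overhead between tours, gives a quick estimate of the total drilling time.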

Chaos has important properties that can be exploited to carry out information-processing tasks, such as sensitivity to initial conditions, control, and synchronization. It has been suggested by biologists that chaos plays an important role in information-processing tasks in the human brain. One of the chaotic neural networks developed recently is the Nonlinear Dynamic State (NDS) neuron. The model has some limitations and can be enhanced in different ways. There are three aims of this research. One is to study the effects of scaling factors of the chaotic attractor of the NDS model, which is based on the Rössler model. The research also aims to reconsider the analytical solutions by tuning the parameters of the model. Finally, it aims to enhance the NDS model in terms of stabilization so that the suggested large number of memories can be exploited. While a Hopfield neural network can give a memory capacity of 0.15n (where n is the number of neurons), one NDS neuron may theoretically give access to a large number of unstable periodic orbits (UPOs), which correspond to memories in phase space.

One of the fundamental problems in coding theory is to determine, for a given set of parameters q, n, and d, the value Aq(n, d), which is the maximum possible number of code words in a q-ary code of length n and minimum distance d. Codes that attain this maximum are said to be optimal. Since Aq(n, d) is unknown for certain sets of parameters, scientists have determined lower bounds, and researchers have investigated the use of different evolutionary algorithms for improving the lower bounds for a given set of parameters. In this project, we are interested in finding sets of maximally distant codes for a certain set of parameters, to provide error detection and/or correction features. For a practically sized problem, this is challenging due to the prohibitively large solution space.
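For small parameters, a simple greedy "lexicode" already gives a quick lower bound on Aq(n, d) (binary case shown; this standard construction is an illustration, not the evolutionary approach proposed here). For n = 4, d = 2 it happens to attain the known optimum A2(4, 2) = 8:

```python
def hamming(a, b):
    """Number of bit positions in which words a and b differ."""
    return bin(a ^ b).count("1")

def lexicode(n, d):
    """Scan all 2**n binary words in lexicographic order, keeping each word
    whose distance to every kept word is at least d.
    The size of the result is a lower bound on A2(n, d)."""
    code = []
    for w in range(2 ** n):
        if all(hamming(w, c) >= d for c in code):
            code.append(w)
    return code
```

For larger n and d the greedy bound is usually loose, which is where evolutionary search becomes attractive.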

Generally, optimization is a process with several components: an objective function f, a vector of variables X = {x1, x2, . . . , xn}, and a vector of constraints C = {c1, c2, . . . , cm} which limit the values assigned to X, where n and m correspond to the problem dimensions and the total number of constraints, respectively. Then the solution s is the set of values assigned to X confined by C, and the solution space S is the set of all possible solutions. The goal is to find the minimum solution s' ∈ S where f(s') ≤ f(s) for all s ∈ S.
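These components can be made concrete with a toy exhaustive search (the objective and constraint below are invented for illustration): two integer variables, one constraint, and f minimized by scanning the whole solution space S:

```python
from itertools import product

# Hypothetical objective f: squared distance from the target point (2, 2).
def f(s):
    x1, x2 = s
    return (x1 - 2) ** 2 + (x2 - 2) ** 2

# Hypothetical constraint c1: x1 + x2 <= 3.
def feasible(s):
    return sum(s) <= 3

# Solution space S: all feasible assignments with x1, x2 in {0, 1, 2, 3}.
S = [s for s in product(range(4), repeat=2) if feasible(s)]
best = min(S, key=f)  # s' with f(s') <= f(s) for all s in S
```

Exhaustive enumeration like this is only viable for tiny S; for realistic dimensions, heuristic and evolutionary methods explore S selectively instead.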

Digital images are modeled as a fine grid of 2D points, each called a pixel (short for picture element). The color of each pixel is modeled as a discrete 3D vector whose elements are the red, green, and blue components; this can express any color perceivable by the human visual system [6, 19].

Virtually all high-end modern digital displays can realize this model with a resolution of 8 bits (i.e. 256 quantum levels) or more per color component, which is known as true-color digital image display. However, there remains a wide-scale need to deal with devices and setups with limited (sometimes very limited) color display capabilities, in which each pixel can only be switched to one of a (relatively) small set of colors called a palette. A few examples of such devices and setups in wide use are:

Vector quantization (VQ) is a fundamental signal processing operation that attributes a given point in a multidimensional space (i.e. a vector) to one of the centroids in a codebook, which in turn is inferred (via some offline codebook-making algorithm such as LBG, K-means, etc.) to optimally represent a population of points (e.g. features) corresponding to some observable phenomenon [12, 17, 18, 24]. VQ is typically implemented via a "minimum distance" criterion, which is an instance of the hard-deciding "winner-takes-all" policy.
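The "minimum distance" rule can be sketched in a few lines (the Euclidean metric and the toy codebook are illustrative choices):

```python
import math

def vq_hard(point, codebook):
    """Winner-takes-all VQ: return the index of the nearest centroid."""
    return min(range(len(codebook)),
               key=lambda i: math.dist(point, codebook[i]))
```

Every point is mapped deterministically to exactly one centroid, which is the hard decision that the probabilistic variant below relaxes.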

Our intended project, on the other hand, introduces a novel probabilistic criterion for VQ (ProVQ) that is an instance of a fairer soft-deciding approach. Our probabilistic VQ builds a probability distribution over the belonging of a given point/vector to each centroid in the codebook, inversely proportional to the distances (i.e. directly proportional to the closeness) between that point and all the codebook's centroids. The actual runtime arbitration of the given point to a specific centroid is decided via a random election simulator following that probability distribution.
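One plausible instantiation of this soft-deciding criterion (the exact inverse-distance weighting and the smoothing constant eps are our assumptions, not a fixed design of the project) is:

```python
import math
import random

def provq_probs(point, codebook, eps=1e-9):
    """Probability of each centroid, inversely proportional to its distance
    from the point; eps avoids division by zero at a centroid itself."""
    weights = [1.0 / (math.dist(point, c) + eps) for c in codebook]
    total = sum(weights)
    return [w / total for w in weights]

def provq_assign(point, codebook, rng=random):
    """Random election: draw a centroid index from that distribution."""
    probs = provq_probs(point, codebook)
    return rng.choices(range(len(codebook)), weights=probs, k=1)[0]
```

Unlike the hard winner-takes-all rule, points near a class boundary are sometimes attributed to the runner-up centroid, which is what produces the smooth boundaries discussed next.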

We speculate that our ProVQ, which results in smooth boundaries separating the different classes, will mitigate the negative effect of over-fitting that degrades the performance of machine learning/classification systems incorporating VQ [11], and may also make these systems more robust to the inevitable noise superimposed on their inputs.

To experimentally test this speculation, we will incorporate ProVQ in one of the state-of-the-art discrete HMM-based Arabic type-written OCR systems [2, 3, 12, 32], and compare its recognition performance with the...