About

“Computation is what happens dynamically to information from one moment to the next.” – Susan Stuart and Gordana Dodig-Crnkovic, Computation, Information, Cognition: The Nexus and the Liminal, Cambridge Scholars Publishing, Newcastle, 2007.

“Normally, a literary description of what an automaton is supposed to do is simpler than the complete diagram of the automaton. It is not true a priori that this always will be so. There is a good deal in formal logic which indicates that when an automaton is not very complicated the description of the function of the automaton is simpler than the description of the automaton itself, as long as the automaton is not very complicated, but when you get to high complications, the actual object is much simpler than the literary description.” – von Neumann, J. (1987). Papers of John von Neumann on Computing and Computer Theory, Hixon Symposium, September 20, 1948, Pasadena, CA, The MIT Press, pp. 454–457.

The industry’s focus is shifting from conventional computing-based IT to a new IT that exploits neural-network-based computing. The 1st International Workshop on Software Engineering for Cognitive Services, held at the 40th International Conference on Software Engineering (May 27 – June 3, 2018) in Gothenburg, Sweden, set a new theme: a unified approach to developing, deploying, and managing both symbolic-computing-based IT and neural-network-based IT.

Workshop Theme:

AI/ML/DL have made great progress in creating insights from data for specific domain problems. However, the data science workflow is complex, labor-intensive, and time-consuming. We need new software engineering practices that leverage cloud computing, fulfil non-functional requirements efficiently and cost-effectively, and accelerate algorithm convergence. What can the software engineering profession do to scale these applications into real-time, self-managing (cognitive) workloads that interact with other cognitive systems? New computing models with a cognitive computing overlay have demonstrated that the complexity of workload management can be reduced and computation management can be accelerated with improved productivity. How can software engineering practices adopt these models and leverage the globally interoperable cloud resources that are becoming ubiquitous?

Since John von Neumann introduced the stored-program implementation of the Turing machine, we have seen an explosion of information processing to model various processes and execute them, delivering efficiency and resilience at scale. Interestingly, the same implementation of the Turing machine has given us two distinct computing models. First, symbolic computing outperforms humans in tasks that can be easily described by a list of formal, mathematical rules or a sequence of event-driven actions, such as modeling, simulation, business workflows, and interaction with devices. Second, the neural-network (connectionist) computing model allows computers to learn and understand the world in terms of a hierarchy of concepts and their inter-relationships, to perform tasks that are easy to do “intuitively” and feel automatic, but are hard to describe formally or as a sequence of event-driven actions, such as recognizing spoken words or faces. The uncanny resemblance of these models to biological computing models – information processing using the genes in DNA and the neural networks in the neocortex – has brought cognition into computing, and the word has become the anchor for the future of computing itself.
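The contrast between the two models can be made concrete with a toy example. The sketch below is illustrative only (it is not from the workshop materials): it solves the same task, logical AND, both ways – symbolically, as an explicit rule, and connectionistically, as a single perceptron that learns the rule from labeled examples.

```python
# Symbolic computing: the behavior is an explicit, formal rule.
def symbolic_and(x, y):
    return 1 if (x == 1 and y == 1) else 0

# Connectionist computing: a single perceptron learns the same behavior
# from examples, using the classic perceptron update rule.
def train_perceptron(samples, epochs=20, lr=0.1):
    w0, w1, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x, y), target in samples:
            out = 1 if (w0 * x + w1 * y + b) > 0 else 0
            err = target - out
            # Adjust weights and bias only when the prediction is wrong.
            w0 += lr * err * x
            w1 += lr * err * y
            b += lr * err
    return w0, w1, b

# Labeled examples of AND; AND is linearly separable, so the
# perceptron is guaranteed to converge on it.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, b = train_perceptron(samples)

def learned_and(x, y):
    return 1 if (w0 * x + w1 * y + b) > 0 else 0
```

The symbolic version is trivially simple here; the point of the contrast is that the learned version needs no rule at all, only examples – which is what makes the connectionist model suited to tasks like face or speech recognition, where no compact rule exists.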

However, the current state of the art in information technologies (often hijacked by vendor-driven infrastructure technologies) is fragmented and evolving in silos such as:

Symbolic-computation-based process automation, with its own software engineering practices;

AI/ML/DL implementations of information processing to create insights; and

Infrastructure technology advances that provide computing resources at scale and with resilience, in the form of cloud islands.

With all these advances at hand, we still have to “catch the errors, explain them and correct them” to move on. Biology, on the other hand, teaches us that the introduction of cognition (which provides a functional characterization of a system’s operations, receives inputs from other cognitive systems, and provides outputs that influence their behavior) gives living systems the ability to execute their processes without requiring a reboot.

Recent advances in introducing a cognitive control overlay allow processes to be executed without a reboot. This workshop aims to bring together experts in software engineering practice, AI/ML/DL data scientists who have implemented systems at scale, and infrastructure technologists shaping the next generation of cloud and computing resources, to discuss a unified way forward that leverages the best practices in all these areas.
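As a rough illustration of the overlay idea (this is not the DIME Network Architecture itself; the class and function names here are invented for this sketch), a control layer can monitor a managed computation, detect a fault, and repair the computation’s state in place so that execution continues without restarting the host process:

```python
# Illustrative sketch only: a control overlay wraps a managed computation
# with monitor/repair logic, so faults are handled in place rather than
# by rebooting the system. All names are hypothetical.

class CognitiveOverlay:
    """Wraps a task with a monitoring loop and a repair policy."""

    def __init__(self, task, repair, max_retries=3):
        self.task = task              # the managed computation
        self.repair = repair          # policy applied when the task faults
        self.max_retries = max_retries
        self.log = []                 # record of faults and recoveries

    def run(self, state):
        for attempt in range(self.max_retries + 1):
            try:
                result = self.task(state)
                self.log.append(("ok", attempt))
                return result
            except Exception as exc:          # monitor detects a fault...
                self.log.append(("fault", str(exc)))
                state = self.repair(state)    # ...and repairs state in
                                              # place; no process restart
        raise RuntimeError("task unrecoverable after repairs")

# A toy managed task that faults on bad input state.
def divide_total(state):
    return state["total"] / state["count"]

def fix_count(state):
    # Repair policy: restore a sane default instead of crashing.
    state["count"] = max(state.get("count", 0), 1)
    return state

overlay = CognitiveOverlay(divide_total, fix_count)
result = overlay.run({"total": 10, "count": 0})
```

Here the first attempt raises a ZeroDivisionError; the overlay catches it, applies the repair policy, and the second attempt succeeds – the managed computation recovers while the surrounding process keeps running.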

This workshop starts from the belief that we need a foundational transformation in how we design, develop, and manage information services, in the spirit of what Alan Turing and John von Neumann did in sowing the seeds of today’s IT. We need new computing paradigms to deal with deterministic algorithms facing non-deterministic interactions during their execution. Software functions, structures in distributed execution environments, and fluctuations at scale require a new generation of self-managing software systems that adjust and adapt themselves without requiring a reboot, to address the communication, collaboration, and commerce at the speed of light demanded by today’s consumers of IT.

Paris, June 13–16, 2016

This conference, which started from the workshop on Cloud Computing at WETICE 2009, has produced a major new idea, “Cognitive Application Networks”, built on the new DIME Network Architecture and discussed in subsequent conferences through 2015. A commercial implementation of this concept now clearly demonstrates the power of infusing cognition into computing through managed Turing machines, as discussed in many WETICE papers.

Dr. Rao Mikkilineni received his PhD from the University of California, San Diego in 1972, working under the guidance of Prof. Walter Kohn (Nobel Laureate, 1998). He later worked as a research associate at the University of Paris, Orsay; the Courant Institute of Mathematical Sciences, New York; and Columbia University, New York.

He is currently the Co-Founder and Chief Scientist at C3DNA Inc., a Silicon Valley startup developing next-generation computing infrastructure. His past experience includes AT&T Bell Labs, Bellcore, US West, several startups and, more recently, Hitachi Data Systems. He has lived and worked in the USA, France, and Japan.

Dr. Giovanni Morana received his PhD from the University of Catania, Italy, and is currently the CTO and head of R&D at C3DNA, a Silicon Valley company developing cognitive distributed-computing infrastructure. His past experience includes the University of Catania and STMicroelectronics. He also co-chaired the IEEE conference track on the convergence of distributed clouds, grids, and their management at WETICE 2015.

Dr. Fabrizio Messina received his PhD from the University of Catania, Italy, and is currently a researcher at the University of Catania. He has contributed many papers on grid computing, cloud computing, and distributed-systems management.