04 December 2017

How cloudy and green will mobile network and services be?

The CLEEN international workshop series focuses on “Cloud Technologies and Energy Efficiency in Mobile Communication Networks” and over the years has attracted great interest from both research and industry. Each year the CLEEN workshop has collaborated with EU projects and provided a great opportunity for researchers and industry practitioners to share their state-of-the-art research and development results in areas of particular interest.
The next edition, the CLEEN2018 workshop, will be co-located with IEEE VTC2018-spring (Porto, 3 June 2018, http://www.ieeevtc.org/vtc2018spring/index.php), where particular emphasis will be given to edge cloud, MEC and vertical segments, due to the growing interest in these topics on the road to 5G networks.
CLEEN2018 aims to explore novel concepts that allow for flexibly centralised radio access networks using cloud processing based on open IT platforms, in coordination with network function virtualization technologies and MEC (Multi-Access Edge Computing), which are recognized as key enablers for the definition of future 5G systems. The aim is to guarantee a high quality of experience for mobile access to cloud-processing resources and services, and to allow a future network evolution focused on energy efficiency and cost-effectiveness. In fact, all future innovative network solutions will be conceived and deployed with a long-term perspective of sustainability, both in terms of the energy consumption of the mobile network (and related interoperability with terminals) and the cost efficiency of the different deployment and management options. This requires new concepts for the design, operation, and optimization of radio access networks, backhaul networks, operation and management algorithms, and architectural elements, tightly integrating mobile networks and cloud processing. The workshop will cover technologies across the PHY, MAC, and network layers, technologies which translate the cloud paradigm to the radio access and backhaul network, and will analyse the network evolution from the energy-efficiency perspective. It will study the requirements, constraints, and implications for mobile communication networks, as well as the potential relationship with the offered services, from both the academic and the industrial point of view.
Below is the link to the call for papers, which we kindly ask you to promptly forward to your projects/colleagues and other interested people.
The CLEEN2018 workshop program is being finalized, and we are working hard to organize a great panel discussion with keynote speakers selected from highly qualified representatives in the international field.
Stay tuned!
Dario Sabella

INTEL, General Chair of CLEEN2018 workshop

30 November 2017

"Nervous Systems" for Smart Cities... but what about jellyfish?


The metaphor of future networks (e.g., SDN/NFV, 5G) becoming the "Nervous System" of the Digital Society and Economy has been mentioned several times in this blog.

I remember making a welcome presentation at EuCNC-2014 showing this picture, elaborating this vision for the first time (at least to my knowledge). In the talk, my take was that technology advances (SDN, NFV, Cloud Computing, AI) are creating the conditions to deploy, in the Digital Society and Economy, a vast number of pervasive "control loops" (or, if you prefer, autonomic control loops à la MAPE-K) mimicking the role of a "nervous system" in a living being.
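To make the "control loop" idea concrete, here is a minimal sketch of a MAPE-K (Monitor-Analyze-Plan-Execute over shared Knowledge) loop. The metric, threshold and "scale out" action are invented for illustration and are not taken from any specific system.

```python
class MapeKLoop:
    """Monitor-Analyze-Plan-Execute loop over shared Knowledge (MAPE-K)."""

    def __init__(self, threshold=0.8):
        # Knowledge: shared state consulted by all four phases.
        self.knowledge = {"load_history": [], "threshold": threshold}

    def monitor(self, reading):
        # Monitor: collect a metric from the managed resource.
        self.knowledge["load_history"].append(reading)

    def analyze(self):
        # Analyze: symptom = three consecutive readings above threshold.
        recent = self.knowledge["load_history"][-3:]
        return len(recent) == 3 and all(r > self.knowledge["threshold"] for r in recent)

    def plan(self, symptom):
        # Plan: pick an adaptation action if a symptom was detected.
        return "scale_out" if symptom else None

    def execute(self, action):
        # Execute: here we simply return the action instead of actuating it.
        return action

    def step(self, reading):
        self.monitor(reading)
        return self.execute(self.plan(self.analyze()))


loop = MapeKLoop()
actions = [loop.step(load) for load in [0.5, 0.9, 0.95, 0.97]]
# Only the fourth step sees three consecutive high readings and reacts.
```

A real deployment would of course replace the toy threshold rule with richer analytics, but the sense-decide-act shape of the loop is the same.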


...and in this paper from December 2014:


...and more recently in this piece:


Today I stumbled upon this press release:



...so we are witnessing progress in realizing this vision!

We could even extend this biological metaphor by considering that the traditional view of a central nervous system is not valid for some living beings, e.g., jellyfish: in fact, they have a ring nervous system, located along the margin of the bell!


Is this a lesson learnt from Nature about the value of decentralization in case of asymmetry?





22 November 2017

Accelerating Network Innovation with an Open, Disaggregated Network Operating System


It has been mentioned several times in this blog (and not only here) that the model of an Operating System for future SDN/NFV infrastructures evolving towards 5G would be a "game changer", at least for reaching three targets:
  • Smart Opex (e.g., with simplified and automated Operations)
  • Smart Capex (e.g., with dynamic enforcements of customer profitability models) 
  • Better Customer Experience and also new Services

I've recently read this very interesting White Paper from AT&T on what they call "dNOS", a disaggregated Network Operating System. The overall goal is accelerating Network Innovation, and the instrument they see for that is the dNOS.

In the paper, AT&T is mentioning three imperatives:
  • Faster introduction of technologies, designs, and features by means of a collaborative ecosystem of hardware and software component vendors
  • Flexibility in network design and service deployment via plug-n-play hardware and software components that can cost-effectively scale up and down
  • Unit-cost reduction through using merchant silicon, standard hardware and software technology components with very large economies-of-scale wherever appropriate.

The white paper reads as a call for hardware and software makers, open source developers, telecom companies, standards bodies and others to start thinking about how to develop and push this concept forward.

Personally, I think it's a very interesting initiative!

  



21 November 2017

Why is Omics a "pivotal" use case for 5G?

The term omics informally refers to a number of avenues in biology ending in -omics, such as genomics, proteomics or metabolomics.

Let’s focus for a while on genomics. Advances in sequencing technologies are progressively reducing the cost of sequencing a human genome to the order of $1,000. This is likely to have a big impact on many application and societal fields (biology, precision medicine, the food industry, etc.) which make use of the massive data and information stored in DNA sequences.

It is expected that genomics will be more demanding (in terms of processing, storage and networking services) than the three main big data domains, namely astronomy, YouTube and Twitter. At the same time, networks and service platforms are going to face a systemic techno-economic transformation (called Softwarization, enabled by advances in SDN-NFV technologies): networks and service platforms will evolve to become end-to-end software frameworks (integrating processing, storage and networking) supported by hyper-connected links (both fixed and mobile); this will mean more and more flexibility and programmability (with multi-level APIs) to satisfy, on demand, the new dynamic needs/requirements of big data application areas such as genomics.



Thus, today is the perfect time to bring together the 5G (which is much more than the evolution of mobile 4G) and genomics communities to demonstrate how SDN-NFV/5G can help enable a true genomic (and omics) revolution.

New ecosystems and collaborations have to be created between universities, pharmaceutical companies, sequencing machine manufacturers, medicine and biology research centers, hospitals, and service providers/network operators. The turning point is that the genomics ecosystem will in fact require not only ultra-broadband connectivity, but also the flexibility of creating and orchestrating on-demand infrastructure slices of resources for processing and storage services of big volumes of data.

For large public facilities, such as hospitals or research centers, 5G technologies will improve the capability of massive analysis, providing automatic and scalable processing services in 5G networks and relieving them of the burden of managing dedicated computing facilities. The usage of SDN-NFV/5G will allow setting up dedicated virtual networks that also hook in logical processing and storage resources on which to execute Machine Learning services for big data analysis.
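As a purely illustrative sketch of what an "on-demand slice of resources" might look like from the genomics tenant's side, consider the request below. The schema, field names and figures are all invented for the example and do not reflect any real orchestrator API.

```python
from dataclasses import dataclass, field

@dataclass
class SliceRequest:
    """A tenant's request for an isolated pool of network, compute and storage."""
    tenant: str
    bandwidth_gbps: float          # guaranteed connectivity
    vcpus: int                     # processing for sequence-analysis pipelines
    storage_tb: float              # room for raw reads and results
    services: list = field(default_factory=list)

    def validate(self):
        # Minimal sanity check before handing the request to an orchestrator.
        return self.bandwidth_gbps > 0 and self.vcpus > 0 and self.storage_tb > 0

# Hypothetical request from a hospital genomics lab.
hospital_slice = SliceRequest(
    tenant="hospital-genomics-lab",
    bandwidth_gbps=10.0,           # moving raw reads off the sequencers
    vcpus=256,                     # alignment / variant-calling jobs
    storage_tb=50.0,               # on the order of hundreds of genomes
    services=["ml-analytics", "secure-archive"],
)
assert hospital_slice.validate()
```

The point is that the facility declares its needs (connectivity plus processing and storage) and the network provides them on demand, instead of the hospital running dedicated computing facilities itself.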



In summary, gen(-omics) in general is a pivotal use case for 5G, not only for ultra-broadband fixed-mobile connectivity, but also, and perhaps especially, for aspects of programmability of both application and network services (through APIs), security and privacy management, and low-cost SDN-NFV integrated solutions for the Omics ecosystems.

20 November 2017

Which way to the Digital Business Transformation?

Today we are witnessing a number of intertwining techno-economic drivers (SDN, NFV, Open Source, IT advances, etc.) which are creating the conditions for a Digital Business Transformation in Telecommunications (and not only there: several other socio-economic ecosystems are likely to be impacted in the next 5 years).

In this context, Network Operators and Service Providers are looking for innovative solutions for managing this Digital Business Transformation: for example, ways of 1) managing the growing “complexity” of an infrastructure which is going to be cloudified/softwarised, which means addressing management, control and orchestration issues for virtualised networks and services; 2) improving the Quality of Service/Experience of Customers, for example by integrating/orchestrating distributed architectures (e.g., Cloud-Edge-Fog Computing), by adopting more and more Big Data analytics and Computational Intelligence, etc.; and 3) enabling new digital services and business roles.

Overall, we see for the first time a common “reference model” emerging for the future infrastructures of both Telecom Operators and OTTs.

In essence, this common “reference model” will be based on: 1) a physical layer which will include and integrate compute, storage (IT) and network resources (up to the edge); 2) a virtualization layer which will provide high-level abstractions of all the infrastructure resources.

On top of these layers there will be the so-called Operating System: the conceptual extension of a laptop's OS to an entire infrastructure.

To put it simply, the OS will allow Virtualised Network Functions/services (VNFs) to be dynamically combined and orchestrated to create specific end-to-end “service chains” (for serving applications), which will be executed in “slices”: “isolated” pools of resources made available specifically to meet QoS requirements.
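The "service chain" idea above can be sketched in a few lines: VNFs composed into an end-to-end chain that runs within a slice's pool of resources. The VNF names and the composition helper are illustrative assumptions, not a real orchestrator API.

```python
def firewall(packet):
    # Inspect traffic before it enters the chain.
    packet["inspected"] = True
    return packet

def nat(packet):
    # Rewrite the source address.
    packet["src"] = "public-ip"
    return packet

def video_optimizer(packet):
    # Application-specific processing at the end of the chain.
    packet["optimized"] = True
    return packet

def chain(*vnfs):
    """Compose VNFs into one end-to-end service chain."""
    def run(packet):
        for vnf in vnfs:
            packet = vnf(packet)
        return packet
    return run

# The chain runs inside a "slice": here, simply the set of VNFs
# reserved for this application.
video_slice = chain(firewall, nat, video_optimizer)
result = video_slice({"src": "10.0.0.1", "dst": "cdn"})
```

The OS's job, in this picture, is to build and tear down such chains dynamically and to keep each slice's resources isolated from the others.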



Overall (from 10,000 feet) it looks like a transition from a business model with “90% customized hardware and 10% software” to one that is “10% common hardware and 90% software”, to quote this press report.

In reality, the scenario is rather “chaotic”, at least today: a “plethora” of open source software platforms and tools is available, with more under development, and a number of functional architectures are being defined in various SDOs and Forums.



As a matter of fact, this is a very big transition, with several socio-economic and cultural implications, so it is more than natural that the Telco ecosystem is surfing such a “chaotic” transition…

...but is this the only way to look at this "transformation"?

Can we "think differently", as Google and FB do, for example?





12 September 2017

The “Operating System” model for the Digital Society

We are witnessing a number of techno-economic drivers (e.g., global and low-cost access to IT and network technologies, which moreover is accelerating) which are creating the conditions for a “Cambrian explosion” of new roles, services, value chains, etc. This is true for Telecommunications/ICT and also for several social contexts (e.g., Smart Cities) and industrial ecosystems (e.g., Industry 4.0).

We realize that Telecom infrastructures will have to “tame” a growing “complexity” (e.g., hyper-connectivity, heterogeneity of nodes and systems, high levels of dynamism, the emergence of non-linear dynamics in feedback loops, possible uncontrolled interactions); they will have to be very effective, low-cost and self-adaptable to highly variable context dynamics (e.g., the need to change strategies with other Players, fast provisioning of any service and adaptive enforcement of business policies against end-User and Vertical App requirements, local-vs-global geographical policies, etc.).

We’ve mentioned several times that in order to face such challenges we need properly innovative paradigms (e.g., based on DevOps, adopting Computational Intelligence, capable of scaling to millions of VMs/Containers) to manage the future Softwarized Telecom infrastructures (i.e., based on SDN and NFV, pursuing the decoupling of HW from SW, virtualization and the Cloudification-Edgification of functions and services). And this implies not only technical/engineering challenges but also challenges related to governance, organization, culture, skills, etc.

Now let’s extend the concept of infrastructure beyond Telecoms. A Smart City also has its own physical infrastructure, which is heterogeneous and includes a complex variety of resources whose dynamics are intertwined; so does a smart factory in Industry 4.0. They too will have to be very effective, low-cost and self-adaptable to highly variable context dynamics.

So my take is that we are facing a sort of non-linear phase transition of a complex system (the intertwining of our Society, Industries, Culture…) whose control variables include hyper-connectivity, globalization, digitalization, etc. How can we extract value from this phase transition?

The model of an Operating System (OS) would represent, for any Industry adopting it, the “strategic and unifying approach” for managing this phase transition. Not only does it allow taming the complex oscillations of this transition, but it also extracts value from them dynamically, creating and running ecosystems, even new ones.

In essence, this requires the virtualization/abstraction of all resources/services/functions (in a broad sense, including those of a Smart City or an I4.0 Factory) and secure API access to them by End-Users/Developers, Third Parties and other related Operators.


The future sustainability of the Digital Society is about the flourishing and running of 5G Softwarised Ecosystems.

My take is that we need systems thinking to design this Digital Society OS, capable of enabling dynamic trade-offs from Slow-Cheap to Fast-Costly and from Flexible-General to Inflexible-Special.

Eventually, look at how Nature implemented it... with a very distributed and resilient approach.


08 September 2017

Talking the language of Softwarization: towards Service2Vectors (part 2)

SDI functions and services modularization can be achieved through Network and Service Primitives (NSPs): this will increase the level of flexibility, programmability and resilience of the SDI, for example improving agility in software development and operations when using DevOps approaches. On the other hand, there is a cost to pay: it increases the level of complexity of the SDI.

As a consequence, management, control and orchestration (and in general all the OSS/BSS processes) of an SDI will have to deal with an enormous number of NSPs, which have to be interconnected/hooked and operated to implement (the logic of) network services and functions. Moreover, these NSPs will have to be continuously updated and released.

This can be simplified, and above all automated, by using a dynamic multi-dimensional service space in which to encode a distributed representation of all the NSPs of an SDI. Remember what is done, for example, in the approaches adopted for word embedding in Natural Language Processing (NLP): see, for instance, this tutorial on the word2vec model by Mikolov et al., which is used for learning vector representations of words.

Leveraging this thinking, I’ve invented a method (service2Vectors) for the distributed representation of NSPs with a vector of several elements, each of which captures the relationships with other NSPs. So, each NSP is represented by a distribution of weights across the elements of the vector, which comes to represent in some abstract way the ‘meaning’ of that NSP. These NSP vectors can be seen as single points in a high-dimensional service space. This multi-dimensional space can be created and continuously updated by using Artificial Intelligence (A.I.) learning methods (e.g., recurrent neural networks).

In an SDI there might be thousands or even more different NSPs: together they form a sort of vocabulary whose terms can be used for expressing the SDI services (for example through an intent-based language, example below). Let’s assume, for example, that this vocabulary of NSPs has 1000 elements; then each vector representing an NSP will have V = 1000 elements, and each NSP can be represented by a point in a space of 1000 dimensions.

This distributed representation of NSPs in a multi-dimensional service space allows A.I. learning algorithms to process the “language” (e.g., an intent-based language, example below) used by Applications and Users to formulate service requests to the SDI. In fact, NSP vectors can be given as inputs to a recurrent neural network which can be trained, for example, to predict a certain service context given an NSP and/or, vice versa, an NSP given a certain service context. The learning algorithm could go, for example, through sets of thousands of service contexts (existing compositions of NSPs).

Once the recurrent neural network is trained to make these predictions to some level of accuracy, the output is the so-called space matrix of the trained neural network, capable of projecting any NSP vector into the space. NSPs with similar contexts tend to cluster in this space; for example, this matrix can be queried to find relationships between NSPs, or the level of similarity between them.
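A toy sketch of the geometry behind this: NSPs that appear in similar service contexts end up close in the space. Instead of the trained neural network, a simple co-occurrence matrix stands in for the space matrix, just to show the clustering effect; all NSP names and service compositions below are invented for the example.

```python
import math
from itertools import permutations

# Each "service context" is an existing composition of NSPs.
services = [
    ["allocate_vm", "attach_storage", "configure_firewall"],
    ["allocate_vm", "attach_storage", "start_monitoring"],
    ["setup_tunnel", "configure_firewall", "start_monitoring"],
    ["setup_tunnel", "allocate_bandwidth"],
]

vocab = sorted({nsp for s in services for nsp in s})
index = {nsp: i for i, nsp in enumerate(vocab)}
V = len(vocab)

# Co-occurrence matrix: row i is the vector of NSP i in a V-dimensional
# service space (a stand-in for the trained space matrix).
M = [[0.0] * V for _ in range(V)]
for s in services:
    for a, b in permutations(s, 2):
        M[index[a]][index[b]] += 1.0

def similarity(a, b):
    # Cosine similarity between two NSP vectors.
    va, vb = M[index[a]], M[index[b]]
    dot = sum(x * y for x, y in zip(va, vb))
    norms = math.sqrt(sum(x * x for x in va)) * math.sqrt(sum(x * x for x in vb))
    return dot / norms

# NSPs used in similar compositions cluster together:
assert similarity("allocate_vm", "attach_storage") > \
       similarity("allocate_vm", "allocate_bandwidth")
```

In the proposal above the space matrix would instead be learned by a neural network over thousands of real service contexts, but the query ("how similar are these two NSPs?") works the same way.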

Another alternative is providing a distributed representation of SDI services (instead of the single NSPs) with a vector of several elements, each of which captures the relationships with other SDI services. So, each SDI service is represented by a distribution of weights across the elements of the vector. These SDI service vectors can be seen as single points in a high-dimensional service space. This multi-dimensional space can be created and continuously updated by using Artificial Intelligence (A.I.) learning methods (e.g., recurrent neural networks).

This reminds me of what Prof. Geoff Hinton argued by introducing the term "thought vector": it is possible to embed an entire thought or sentence, including actions, verbs, subjects, adjectives, adverbs, etc., as a single point (i.e., a vector) in a high-dimensional space. If the thought-vector structure of human language encodes the key primitives used in human intelligence, then the SDI service vector structure could encode the key primitives used by "application intelligence".

Moreover, thought vectors have been observed empirically to possess some properties: one, for example, is known as "Linear Structure", i.e., certain directions in thought-space can be given semantic meaning, and consequently a whole thought vector is the geometrical sum of a set of directions or primitives. In the same way, certain directions in the SDI service space can be given a context meaning, and consequently a whole SDI service vector can be seen as the geometrical sum of a set of directions or primitives.
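A toy illustration of this "Linear Structure" property: if each NSP vector is (by construction, here) the sum of semantic directions, analogies reduce to vector arithmetic. The directions and NSP names below are invented for the example.

```python
# Two families of semantic directions: an action and a resource type.
directions = {
    "scale_up":  [1, 0, 0, 0],
    "migrate":   [0, 1, 0, 0],
    "vm":        [0, 0, 1, 0],
    "container": [0, 0, 0, 1],
}

def vec(action, resource):
    # Each NSP vector is the geometrical sum of its directions.
    return [a + r for a, r in zip(directions[action], directions[resource])]

nsp_vectors = {
    "scale_up_vm":        vec("scale_up", "vm"),
    "scale_up_container": vec("scale_up", "container"),
    "migrate_vm":         vec("migrate", "vm"),
    "migrate_container":  vec("migrate", "container"),
}

def add(u, v): return [a + b for a, b in zip(u, v)]
def sub(u, v): return [a - b for a, b in zip(u, v)]

# Analogy: scale_up_vm - migrate_vm + migrate_container = scale_up_container
analogy = add(sub(nsp_vectors["scale_up_vm"], nsp_vectors["migrate_vm"]),
              nsp_vectors["migrate_container"])
assert analogy == nsp_vectors["scale_up_container"]
```

In a learned space the equality would only be approximate (the nearest NSP vector to the analogy point), exactly as in the famous king - man + woman ≈ queen example for word vectors.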

Hopefully this will pave the way for Humans and non-human Users (apps, avatars, smart or cognitive objects, processes, A.I. entities, ...) "to talk" with Softwarised Infrastructures in a common, quasi-natural language.