Ali Jawad

First-year PhD student


About Me

My name is Ali Jawad FAHS. I am Lebanese, 23 years old, and doing my PhD at Univ Rennes, CNRS, IRISA, France. I hold a Diploma in Telecommunication and Computer Science Engineering from the Lebanese University Faculty of Engineering (ULFG), Lebanon (July 2017), and a Master's degree in Computer Science (Parallel, Distributed, and Embedded Systems) awarded jointly by Université Grenoble Alpes (UGA-IMAG) and the Institut National Polytechnique de Grenoble (Grenoble INP - Ensimag), France (June 2017).

I am currently working on distributed resource scheduling in fog computing. My main research interests are cloud computing, resource scheduling, and distributed systems.

Download my CV

Technical Skills

C/C++ 80%
Python 85%
Networking 90%
Docker Containers 50%
Kubernetes 50%
"The best way to predict the future is to invent it." (Alan Kay, 2003 Turing Award winner)

My PhD

My thesis, titled "Decentralized Fog Computing Infrastructure Control", is supervised by Prof. Guillaume Pierre.

Cloud computing infrastructures are very powerful and flexible, but they are also located very far from their end users. Typical network latencies between an end user and the closest public cloud data center are in the order of 20-40 ms over high-quality wired networks, and 100-150 ms over 4G mobile phone connections. This performance level is acceptable for simple applications such as web browsing, but it makes it impossible to create a wide range of interactive applications. For example, to enable an "instantaneous" feeling, augmented reality applications require that end-to-end latencies (including all networking and processing delays) remain below 20 ms.

To address these issues, a new type of "fog computing" infrastructure is being designed [1,5]. Instead of treating the mobile operator's network as a high-latency dumb pipe between the end users and the external service providers, fog platforms aim to bring cloud resources to the edge of the network, in very close physical proximity to the end users. This is expected to offer extremely low latency between the client devices and the cloud resources serving them.

Fog platforms have a very different geographical distribution compared to traditional clouds. Classical datacenter clouds are composed of many reliable and powerful machines located in a very small number of data centers and interconnected by very high-speed networks. In contrast, fogs are composed of a very large number of points-of-presence, each hosting a few weak and potentially unreliable servers, interconnected with each other by commodity long-distance networks.

However, the management part of current fog computing platforms remains centralized: a single node (or small group of nodes) is in charge of maintaining the list of available server machines, monitoring them, distributing software to them, deciding which server must take care of which task, etc. This organization generates unnecessary long-distance network traffic, does not handle network partitions well, and may even create legal issues if the controller and the compute/storage nodes are located in different jurisdictions.

The goal of this project is to reduce the discrepancy between the broadly distributed compute/storage resources and the -- currently -- extremely centralized control of these resources. We can exploit the fact that the virtual resources in a fog computing platform are in most cases created in immediate proximity of the user(s) who will access them. In this perspective, the platform management processes could be distributed evenly across the infrastructure nodes so that the virtual and physical resources, the users accessing them, and the management processes organizing this system are co-located within a few hundred meters of each other. One interesting direction to address this problem -- which remains to be (in)validated by the doctoral student -- is to execute cloud resource scheduling algorithms [7] on every point-of-presence of the system (whereas traditional clouds centralize these algorithms in a single node), and to base the necessary coordination of multiple schedulers on gossiping algorithms [6] between neighboring points-of-presence.
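To make the idea concrete, here is a minimal sketch of how per-point-of-presence schedulers could coordinate through gossip. All names, the load model, and the deterministic ring schedule are illustrative assumptions for this sketch, not the project's actual design; a real deployment would select gossip partners randomly, as in gossip-based peer sampling [6].

```python
class PoP:
    """A fog point-of-presence: runs its own scheduler and discovers the
    load of other nodes through gossip. Hypothetical names and load model,
    for illustration only."""

    def __init__(self, node_id, load):
        self.node_id = node_id
        self.load = load
        self.known = {node_id: load}  # partial view: node id -> last seen load

    def gossip_with(self, peer):
        # Push-pull exchange: both sides merge each other's partial views.
        merged = {**self.known, **peer.known}
        self.known = dict(merged)
        peer.known = dict(merged)

    def schedule(self):
        # Purely local decision: place the next task on the least-loaded
        # node this PoP currently knows about -- no central controller.
        return min(self.known, key=self.known.get)


def run_gossip(pops, rounds):
    # Deterministic ring schedule keeps the demo reproducible; real gossip
    # would pick a random peer from a sampled view each round.
    n = len(pops)
    for _ in range(rounds):
        for i, pop in enumerate(pops):
            pop.gossip_with(pops[(i + 1) % n])


loads = [5, 2, 9, 1, 7]
pops = [PoP(i, load) for i, load in enumerate(loads)]
run_gossip(pops, rounds=2)

# After two rounds every PoP has a full view and independently reaches
# the same placement decision (node 3, the least loaded).
decisions = [pop.schedule() for pop in pops]
```

The point of the sketch is that scheduling stays a local operation at each point-of-presence, while gossip bounds how stale each node's view of the rest of the system can be.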

This project is being conducted within the IRISA Myriads team which is working on the design of innovative infrastructures and middleware for future fog computing platforms [2,3,4].


  • [1] "MEC-ConPaaS: An experimental single-board based mobile edge cloud." Alexandre van Kempen, Teodor Crivat, Benjamin Trubert, Debaditya Roy and Guillaume Pierre. In Proceedings of the IEEE Mobile Cloud conference, April 2017.
  • [2] "Kangaroo: A Tenant-Centric Software-Defined Cloud Infrastructure." Kaveh Razavi, Ana Ion, Genc Tato, Kyuho Jeong, Renato Figueiredo, Guillaume Pierre and Thilo Kielmann. In Proceedings of the IEEE International Conference on Cloud Engineering (IC2E), Tempe, AZ, USA, March 2015.
  • [3] "ConPaaS: a Platform for Hosting Elastic Cloud Applications." Guillaume Pierre and Corina Stratan. IEEE Internet Computing 16(5), September-October 2012.
  • [4] "The mobile edge cloud testbed at IRISA Myriads team."
  • [5] "Fog Computing and its Ecosystem." Ramin Elahi, tutorial at the USENIX FAST conference, 2016.
  • [6] "Gossip-based peer sampling." Mark Jelasity, Spyros Voulgaris, Rachid Guerraoui, Anne-Marie Kermarrec and Maarten Van Steen. ACM Transactions on Computer Systems 25(3), 2007.
  • [7] "A Survey on Resource Scheduling in Cloud Computing: Issues and Challenges." Sukhpal Singh and Inderveer Chana. Journal of Grid Computing 14, 2016.

Contact Me


Rennes, France


+33 (0)2 99 84 73 27