Ali Jawad

Ph.D. in Cloud/Edge Computing
M.S. in Parallel, Distributed, and Embedded Systems
Eng. Diploma in Telecommunication Engineering


About Me

My name is Ali Jawad FAHS. I completed my PhD at Univ Rennes, Inria, CNRS, IRISA in France. I hold a Diploma in Telecommunication and Computer Science Engineering from the Lebanese University Faculty of Engineering (ULFG), Lebanon (July 2017), and a Master's degree in Computer Science (Parallel, Distributed, and Embedded Systems) awarded jointly by Université Grenoble Alpes (UGA-IMAG) and Institut National Polytechnique de Grenoble (Grenoble INP - Ensimag), France (June 2017).

My work focused on improving the infrastructure of fog computing, providing it with the tools needed to implement key fog features. We focused on adding the right middleware to ensure proximity-awareness over widely distributed fog nodes, which can be summarized in the following three steps:
       Proxy-mity: Proximity-aware Routing (Published 05/2019 at CCGrid 2019)
       Hona: Proximity-aware Placement (Published 12/2020 at ICSOC 2020)
       Voilà: Proximity-aware Autoscaling (Published 11/2020 at MASCOTS 2020)

Download my CV
"The best way to predict the future is to invent it." (Alan Kay, 2003 Turing Award winner)

My PhD

My thesis is titled "Decentralized Fog Computing Infrastructure Control", supervised by Prof. Guillaume PIERRE.

Cloud computing infrastructures are very powerful and flexible, but they are also located very far from their end users. Typical network latencies between an end user and the closest public cloud data center are in the order of 20-40 ms over high-quality wired networks, and 100-150 ms over 4G mobile phone connections. This performance level is acceptable for simple applications such as web browsing, but it makes it impossible to create a wide range of interactive applications. For example, to enable an "instantaneous" feeling, augmented reality applications require that end-to-end latencies (including all networking and processing delays) remain below 20 ms.

To address these issues, a new type of "fog computing" infrastructure is being designed [1,5]. Instead of treating the mobile operator's network as a high-latency dumb pipe between the end users and the external service providers, fog platforms aim to bring cloud resources to the edge of the network, in very close physical proximity with the end users. This is expected to offer extremely low latency between the client devices and the cloud resources serving them.

Fog platforms have a very different geographical distribution compared to traditional clouds. Classical datacenter clouds are composed of many reliable and powerful machines located in a very small number of data centers and interconnected by very high-speed networks. In contrast, fogs are composed of a very large number of points-of-presence, each with a few weak and potentially unreliable servers, interconnected with each other by commodity long-distance networks.

However, the management part of current fog computing platforms remains centralized: a single node (or small group of nodes) is in charge of maintaining the list of available server machines, monitoring them, distributing software to them, deciding which server must take care of which task, etc. This organization generates unnecessary long-distance network traffic, does not handle network partitions well, and may even create legal issues if the controller and the compute/storage nodes are located in different jurisdictions.

The goal of this project is to reduce the discrepancy between the broadly distributed compute/storage resources and the -- currently -- extremely centralized control of these resources. We can exploit the fact that the virtual resources in a fog computing platform are in most cases created in immediate proximity of the user(s) who will access them. In this perspective, the platform management processes could be distributed evenly across the infrastructure nodes so that the virtual and physical resources, the users accessing them, and the management processes organizing this system, will be co-located within a few hundred meters from each other. One interesting direction to address this problem -- which remains to be (in)validated by the doctoral student -- is to execute cloud resource scheduling algorithms [7] on every point-of-presence of the system (whereas traditional clouds centralize these algorithms in a single node), and to base the necessary coordination of multiple schedulers on gossiping algorithms [6] between neighboring points-of-presence.
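As a purely illustrative sketch of the gossip-based coordination direction mentioned above (not a validated design; the node and view structures here are hypothetical), each point-of-presence could maintain a small partial view of its peers using gossip-based peer sampling in the style of [6]: every node periodically exchanges a subset of its view with a random known peer and merges the result, keeping its view bounded.

```python
import random

class Node:
    """One point-of-presence keeping a bounded partial view of the system."""
    def __init__(self, node_id, view_size=4):
        self.id = node_id
        self.view = set()          # ids of known neighbor nodes
        self.view_size = view_size

    def gossip_with(self, peer):
        """Push-pull exchange: both sides share a view sample and merge it."""
        sent = set(random.sample(sorted(self.view | {self.id}),
                                 min(self.view_size, len(self.view) + 1)))
        received = set(random.sample(sorted(peer.view | {peer.id}),
                                     min(peer.view_size, len(peer.view) + 1)))
        self._merge(received)
        peer._merge(sent)

    def _merge(self, ids):
        self.view |= ids
        self.view.discard(self.id)         # never list ourselves
        # Keep the view bounded: drop random entries beyond view_size.
        while len(self.view) > self.view_size:
            self.view.discard(random.choice(sorted(self.view)))

def gossip_round(nodes):
    """Every node gossips once with a random node from its current view."""
    by_id = {n.id: n for n in nodes}
    for n in nodes:
        if n.view:
            n.gossip_with(by_id[random.choice(sorted(n.view))])

# Bootstrap: each node initially knows only its ring successor.
random.seed(1)
nodes = [Node(i) for i in range(10)]
for i, n in enumerate(nodes):
    n.view.add((i + 1) % len(nodes))

for _ in range(10):
    gossip_round(nodes)
```

After a few rounds, each node's view is populated with a random-looking sample of peers, which is the building block a decentralized scheduler could use to coordinate with neighboring points-of-presence without any central controller.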

This project is being conducted within the IRISA Myriads team which is working on the design of innovative infrastructures and middleware for future fog computing platforms [2,3,4].


  • [1] "MEC-ConPaaS: An experimental single-board based mobile edge cloud." Alexandre van Kempen, Teodor Crivat, Benjamin Trubert, Debaditya Roy and Guillaume Pierre. In Proceedings of the IEEE Mobile Cloud conference, April 2017.
  • [2] "Kangaroo: A Tenant-Centric Software-Defined Cloud Infrastructure." Kaveh Razavi, Ana Ion, Genc Tato, Kyuho Jeong, Renato Figueiredo, Guillaume Pierre and Thilo Kielmann. In Proceedings of the IEEE International Conference on Cloud Engineering (IC2E), Tempe, AZ, USA, March 2015.
  • [3] "ConPaaS: a Platform for Hosting Elastic Cloud Applications." Guillaume Pierre and Corina Stratan. IEEE Internet Computing 16(5), September-October 2012.
  • [4] "The mobile edge cloud testbed at IRISA Myriads team."
  • [5] "Fog Computing and its Ecosystem." Ramin Elahi, tutorial at the USENIX FAST conference, 2016.
  • [6] "Gossip-based peer sampling." Mark Jelasity, Spyros Voulgaris, Rachid Guerraoui, Anne-Marie Kermarrec and Maarten Van Steen. ACM Transactions on Computer Systems 25(3), 2007.
  • [7] "A Survey on Resource Scheduling in Cloud Computing: Issues and Challenges." Sukhpal Singh and Inderveer Chana. Journal of Grid Computing 14, 2016.


Publications

   Ali J. Fahs, Guillaume Pierre, and Erik Elmroth. "Voilà: Tail-Latency-Aware Fog Application Replicas Autoscaler." 28th International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS 2020).

Latency-sensitive fog computing applications may use replication both to scale their capacity and to place application instances as close as possible to their end users. In such geo-distributed environments, a good replica placement should maintain the tail network latency between end-user devices and their closest replica within acceptable bounds while avoiding overloaded replicas. When facing non-stationary workloads it is essential to dynamically adjust the number and locations of a fog application's replicas. We propose Voilà, a tail-latency-aware autoscaler integrated in the Kubernetes orchestration system. Voilà maintains a fine-grained view of the volumes of traffic generated from different user locations, and uses simple yet highly effective procedures to maintain suitable application resources in terms of size and location.
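As a toy illustration of the tail-latency criterion described in this abstract (Voilà's actual procedures are considerably more elaborate; the 20 ms target and p99 percentile below are illustrative assumptions), a scale-up decision could compare the observed tail latency of recent requests against a target:

```python
def needs_scale_up(request_latencies_ms, target_ms=20.0, percentile=0.99):
    """Return True if the observed tail (e.g. p99) latency exceeds the target.

    request_latencies_ms: recent end-to-end latencies observed for one
    application, in milliseconds. A real autoscaler would also decide
    *where* to add the replica, which is the hard part.
    """
    ordered = sorted(request_latencies_ms)
    # Index of the requested percentile (clamped to the last element).
    idx = min(len(ordered) - 1, int(percentile * len(ordered)))
    return ordered[idx] > target_ms
```

For example, with 99 requests at 5 ms and one at 30 ms, the p99 latency is 30 ms and the function signals a scale-up; with all requests at 5 ms it does not.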

   Ali J. Fahs and Guillaume Pierre. "Tail-Latency-Aware Fog Application Replica Placement." 18th International Conference on Service-Oriented Computing (ICSOC 2020).

Latency-sensitive applications often use fog computing platforms to place replicas of their services as close as possible to their end users. A good placement should guarantee a low tail network latency between end-user devices and their closest replica while keeping the replicas load balanced. We propose a latency-aware scheduler integrated in Kubernetes which uses simple yet highly-effective heuristics to identify suitable replica placements, and to dynamically update these placements upon any evolution of user-generated traffic.

   Ali J. Fahs and Guillaume Pierre. "Proximity-Aware Traffic Routing in Distributed Fog Computing Platforms." 19th International Symposium on Cluster, Cloud, and Grid Computing (CCGrid 2019).

Container orchestration engines such as Kubernetes do not take into account the geographical location of application replicas when deciding which replica should handle which request. This makes them ill-suited to act as general-purpose fog computing platforms, where proximity between end users and the replica serving them is essential. We present proxy-mity, a proximity-aware traffic routing system for distributed fog computing platforms. It seamlessly integrates into Kubernetes, and provides very simple control mechanisms that allow system administrators to address the necessary trade-off between reducing user-to-replica latencies and balancing the load equally across replicas. proxy-mity is very lightweight, and it can reduce average user-to-replica latencies by as much as 90% while allowing system administrators to control the level of load imbalance in their system.
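The latency/load trade-off described in this abstract can be sketched in a few lines (this is not proxy-mity's actual algorithm; the blending parameter alpha and the inverse-latency weighting are illustrative assumptions): each node computes per-replica routing weights that interpolate between uniform load balancing (alpha = 0) and strongly proximity-biased routing (alpha = 1).

```python
def routing_weights(latencies_ms, alpha=0.9):
    """Blend uniform load balancing with latency-aware routing.

    latencies_ms: dict mapping replica name -> measured network latency
    (ms) from this node. alpha=0 yields equal weights for all replicas;
    alpha=1 yields weights inversely proportional to latency, strongly
    favoring nearby replicas. Returns weights that sum to 1.
    """
    n = len(latencies_ms)
    uniform = {r: 1.0 / n for r in latencies_ms}
    inverse = {r: 1.0 / lat for r, lat in latencies_ms.items()}
    total = sum(inverse.values())
    proximal = {r: v / total for r, v in inverse.items()}
    return {r: (1 - alpha) * uniform[r] + alpha * proximal[r]
            for r in latencies_ms}

# A node 5 ms from replica-a and 50 ms from replica-b mostly routes to a.
weights = routing_weights({"replica-a": 5.0, "replica-b": 50.0}, alpha=0.9)
```

Exposing alpha as a single administrator-facing knob is one simple way to let operators pick their own point on the latency-versus-imbalance spectrum.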

   Olivier Alphand et al. "Eviter les collisions dans les réseaux 6TiSCH" Rencontres Francophones sur la Conception de Protocoles, l’Évaluation de Performance et l’Expérimentation des Réseaux de Communication (CORES 2018).

Abstract:
Multi-hop 802.15.4e TSCH networks rely on efficient communication schedules. Our solution complements the algorithms that build such schedules by limiting the reuse of time-frequency cells already used by neighboring nodes. Our simulations show a significant gain over existing approaches.

   Ali J. Fahs et al. "Collision prevention in distributed 6TiSCH networks" 2017 IEEE 13th International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob 2017).

The IEEE 802.15.4e standard for low-power wireless sensor networks defines a new mode called Time Slotted Channel Hopping (TSCH) at the Medium Access Control (MAC) layer. TSCH allows highly efficient deterministic time-frequency schedules that are built and maintained by the 6TiSCH operation sublayer (6top). In this paper, we propose a solution that limits the allocation of identical cells to co-located pairs of nodes by distributed TSCH scheduling algorithms. It consists of making nodes able to overhear past cell negotiations exchanged in shared cells by their neighbors, preventing the nodes from reusing already-assigned cells in future allocations. Our mechanism has been tested through simulations that show a significant improvement with respect to random scheduling algorithms.


Teaching

   Master 1 Cloud Computing and Services: RSP (TP) Architectures, Protocoles et Administration des Réseaux

   Master 1 Sécurité, Système, Réseau (SSR): DS (TD) Distributed Systems

   Volunteer Section Leader, Code in Place, Stanford University: Course + TD Python

Contact Me


Rennes, France


+33 (0)2 99 84 73 27