I was given the opportunity to present the SimGrid platform at the INGI Fall 2012 Doctoral School Day in Cloud Computing. It was good for the project because the event gathered many interesting people who conduct research on clouds and distributed systems all around Europe. The framework was well received, and it's quite possible that some of the attendees will start using SimGrid in the near future. Another advantage is that the University of Louvain has an open professor position, for which I will apply in spring.
This time, I had a rather decent set of slides to present, well suited to the time slot I had. 45 minutes for 20 slides that are only "rather loaded" is very reasonable. That's much better than my performance at P2P 2009, where I had something like 10 minutes for 20 uberbloated slides. Brr, that P2P'09 was such an epic fail for me. I guess that this is the kind of error you must experience to improve your presentation style.
Another difference is that I had a much narrower editorial line. My goal was to highlight the "why" aspect of SimGrid. I motivated the simulation approach properly (not "that's how you do good science" but "this is the fastest path from idea to papers") and pinpointed the differences between SimGrid and the other simulators without undue courtesy (user comfort must be backed by tool soundness).
I got a bunch of good questions as feedback. I was first challenged on the fact that parallel simulation leads only to a marginal speedup (around 20% with 24 cores). This question was interesting because it gave me the opportunity to speak about the originality of our parallelization scheme for a discrete-event simulator. I hadn't intended to speak about it at first, but it supported the point that even a dumb simulator of distributed systems needs to be a serious tool from a software and HPC point of view.
I was also asked about the next big challenges for the framework, and answered with the simterpose project, which aims to run arbitrary applications on top of the simulator using virtualization techniques. I intended to also speak of model checking and semantic evaluation, which is another current hot spot for us, but the next question came before I could get to it.
An attendee wondered about the motivation to model and simulate applications nowadays, when computational power is little more than a commodity. Instead, he advocated simply running the tested application in the cloud. That revealed an obvious flaw of this presentation: I failed to stress the importance of combining all methodologies into a complete experimental workflow. Of course, simulation is not a universal solution, and of course you still need to run your application for real. But it's a bit stupid to burn EC2 money just to discover that your design is obviously flawed.
Two potential users asked how hard it would be to add the feature they need. The first one wanted to add a new network model for multi-path TCP connections that would not be Max-Min fair. That would be quite possible, as SimGrid is intended to be multi-model so that model realism can be matched to the setting at hand (even if it's not trivial, as this part of the code was not really written with the idea of being modified by others). The second one wondered whether the development of an RMI backend would be possible, in order to simulate unchanged RMI code. That's an interesting idea, and it would not be very difficult. I guess I'll set up an internship on this soon. I will mail these people shortly to see what can be done on each point.
It was also pointed out to me that the examples on the PeerSim webpage are notoriously bad with regard to simulation performance. I'm sorry to hear that. I should probably contact the authors of this framework to give them an opportunity to provide me with an efficient protocol implementation to which I could compare fairly.
Overall, I had a lot of fun giving this presentation. It was the perfect occasion to give my true feelings on these questions. I particularly appreciated the question about the real difficulty in the day-to-day development of SimGrid. I answered that the multiplication of buzzwords in our domain is a bit bothersome, forcing us to add user interfaces for grids yesterday, clouds today, and probably exascale tomorrow, while the tested algorithms remain the same. Several senior members of the audience smiled at this answer, clearly agreeing with the idea. I tell you, this presentation was fun, and it was a success.