Compiling ActionScript In The Enterprise

(http://blog.joa-ebert.com/2010/03/06/compiling-actionscript-in-the-enterprise/)

Whether ActionScript is ready for the enterprise is a hard question to answer. Every fairly large codebase requires some maintenance. You will have to figure out a strategy for identifying modules and separating them. You will also have to handle the dependencies between modules. Last but not least, you have to compile them.

An important aspect is that when I am talking about modules I am not talking about “Flex Modules” but about the more generic notion of a part of a project.

First of all you have to fulfill some requirements. You cannot assume that every developer works on the same platform, uses the same IDE, or shares the same settings. But you must assume that your project will not be built from an IDE. You must also ensure that the dependencies between projects stay up to date across different platforms and IDEs. Once you have figured out how to do that, you can think about a strategy for compiling your modules.

Today I want to explore the idea of parallel compilation across a cluster of machines to achieve optimal compile speed for ActionScript projects. First of all, you cannot assume that a developer can always keep all modules up to date. Depending on the dependency graph of your projects, compilation might take a very long time. For instance, we have 103 modules at Hobnox. This means we have 103 different compile targets with different dependencies, each yielding its own SWF/SWC file. We also have a “standard library” that is included in nearly every project. Changing this library requires the developer to recompile all 103 modules to update them.
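To make the dependency problem concrete, here is a minimal sketch in Java (with hypothetical module names) of how a build tool can decide what to rebuild after a change: invert the dependency edges and collect every transitive dependent of the changed module.

```java
import java.util.*;

public class RecompileSet {
    // module -> modules it depends on (hypothetical example graph)
    static Map<String, List<String>> deps = Map.of(
        "player",   List.of("stdlib", "ui"),
        "ui",       List.of("stdlib"),
        "uploader", List.of("stdlib"),
        "stdlib",   List.of());

    // Everything that (transitively) depends on `changed` must be rebuilt.
    static Set<String> toRecompile(String changed) {
        // Invert the edges: module -> its direct dependents.
        Map<String, List<String>> dependents = new HashMap<>();
        deps.forEach((m, ds) -> ds.forEach(
            d -> dependents.computeIfAbsent(d, k -> new ArrayList<>()).add(m)));
        // Breadth-first walk over the inverted graph.
        Set<String> result = new LinkedHashSet<>();
        Deque<String> queue = new ArrayDeque<>(List.of(changed));
        while (!queue.isEmpty()) {
            String m = queue.poll();
            if (result.add(m))
                queue.addAll(dependents.getOrDefault(m, List.of()));
        }
        return result;
    }

    public static void main(String[] args) {
        // A change in stdlib forces a rebuild of every module that uses it.
        System.out.println(toRecompile("stdlib"));
    }
}
```

With 103 modules the same walk tells you instantly that touching the standard library means rebuilding essentially everything, while touching a leaf module means rebuilding only itself.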

To illustrate this, let me explain what we went through.

  1. We used FDT to develop our project. Since FDT is Eclipse-based, every module was represented as a separate project with dependencies on other projects. FDT, however, compiles a project only when you launch it, and it does not do this automatically for your dependency graph. As a result you have to compile 103 modules manually in the correct order. This would not be a problem if FDT’s live error highlighting and code analysis did not work on a SWC basis. But since the actual source code between projects is not analyzed, we had to launch all 103 modules to propagate a possible error or change. This was the point at which we had to make a switch.
  2. Ant was our second idea for building our FDT projects automatically. However, we had to keep module dependencies up to date in all the different Ant files, because FCSH requires that the external libraries of a library also be included in every library that depends on it. This means that if A requires B, and C requires A, then C also requires B. So we had to keep 103 Ant targets up to date for each change.
  3. A custom build tool. We started working with a custom build tool I developed in Scala. It was smart enough to figure out which modules had to be recompiled without launching the ActionScript compiler. It also figured out which projects could be compiled in parallel, since the ActionScript compiler is written in a way that makes IO not the only bottleneck. We saw an enormous improvement in compile time and in usability for the developer, since module dependencies were handled automatically.
  4. We switched from FDT to IntelliJ IDEA since it offered more features that we needed. IntelliJ also came with a much better strategy for compiling modules. Basically, it had the feature we had always missed in FDT: it automatically includes libraries and compiles modules in the correct order. However, the guys at JetBrains made the mistake of relying on FCSH. The short story is that FCSH is absolutely not capable of handling a large codebase. What happened to us was that when we started a build, FCSH would take up as much memory as it could, getting slower and slower until it finally crashed. Then FCSH started from scratch, consumed the maximum amount of memory again, and crashed again. This happened until the project finally compiled, which took more than 15 minutes for a single change we made. This was of course unacceptable. Fortunately, JetBrains implemented a feature request of ours in IntelliJ IDEA which comes close to our custom build: projects are now compiled in parallel, without using FCSH and without launching the compiler if no change has been made. This way we are able to work again, although only if we compile just a small subset of all modules, since the main bottleneck is still the compiler.
  5. A continuous integration server was our last option. This is a custom build server, and we are still using it today. After each SVN commit we now build the required modules and their dependencies automatically on a dedicated server. A change in the standard library takes about four minutes to complete on a quad core. The good thing is that we have set up our main project to exclude a lot of libraries. The custom build takes care of that, and we now load those modules from the server. That way we can still compile locally and include the libraries we change frequently; other libraries are loaded from the server and included at runtime.
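The parallel scheduling that made our custom build fast can be sketched roughly as follows — a minimal Java example with hypothetical module names, not our actual Scala tool. Each module is compiled as soon as all of its dependencies have finished, so independent modules run concurrently on a thread pool:

```java
import java.util.*;
import java.util.concurrent.*;

public class ParallelBuild {
    // Compiles every module as soon as its dependencies are done; independent
    // modules run in parallel. Returns the order in which compiles finished.
    static List<String> build(Map<String, List<String>> deps) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        Map<String, CompletableFuture<Void>> jobs = new HashMap<>();
        List<String> finished = Collections.synchronizedList(new ArrayList<>());
        for (String m : deps.keySet()) schedule(m, deps, jobs, pool, finished);
        CompletableFuture.allOf(jobs.values().toArray(new CompletableFuture[0])).join();
        pool.shutdown();
        return finished;
    }

    static CompletableFuture<Void> schedule(String m, Map<String, List<String>> deps,
            Map<String, CompletableFuture<Void>> jobs, ExecutorService pool,
            List<String> finished) {
        CompletableFuture<Void> existing = jobs.get(m);
        if (existing != null) return existing;          // already scheduled
        // Schedule all dependencies first; our job starts when they complete.
        CompletableFuture<?>[] prereqs = deps.get(m).stream()
            .map(d -> schedule(d, deps, jobs, pool, finished))
            .toArray(CompletableFuture[]::new);
        CompletableFuture<Void> job = CompletableFuture.allOf(prereqs)
            .thenRunAsync(() -> {
                // Placeholder for launching the ActionScript compiler on module m.
                finished.add(m);
            }, pool);
        jobs.put(m, job);
        return job;
    }

    public static void main(String[] args) {
        Map<String, List<String>> deps = Map.of(
            "stdlib", List.of(),
            "ui",     List.of("stdlib"),
            "net",    List.of("stdlib"),
            "player", List.of("ui", "net"));
        // "stdlib" finishes first, "player" last; "ui" and "net" may overlap.
        System.out.println(build(deps));
    }
}
```

The point is that the schedule falls out of the dependency graph automatically — nobody maintains 103 hand-ordered targets.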

So the question is whether this is now the ultimate solution. A custom continuous integration server that takes more than four minutes to propagate a change after an SVN commit? I do not think so. And here Scala comes into the game again. There is a project available called Swarm. It makes use of Scala’s support for serializable delimited continuations. This means you can pause the state of your current program, send it to a different node in your cluster, and continue with the program on that machine. Swarm makes this task extremely simple. So a new research project is to take advantage of this feature. We could basically have one master server like Hudson or TeamCity and write a plugin for it that makes use of Scala Swarm. Swarm could be based on nodes in the office — normal computers and maybe a couple of dedicated servers — to compile modules in parallel on multiple nodes. Each node would have to keep its VCS checkout up to date. That way we should be able to compile projects much more quickly and have a much better workflow in a large codebase. Besides that, if you work with a CI server, you could also have tasks like code analysis (using FlexPMD, for instance) done on another machine.
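One way a master server could batch work for such a cluster — a speculative sketch, not Swarm's actual API — is to group modules into “waves”: each wave contains only modules whose dependencies were all built in earlier waves, so every module within a wave could be farmed out to a different node simultaneously.

```java
import java.util.*;

public class ClusterPlan {
    // Group modules into waves: each wave contains modules whose dependencies
    // were all built in earlier waves, so a whole wave can run across nodes.
    static List<List<String>> waves(Map<String, List<String>> deps) {
        Set<String> built = new HashSet<>();
        List<List<String>> waves = new ArrayList<>();
        while (built.size() < deps.size()) {
            List<String> wave = new ArrayList<>();
            for (var e : deps.entrySet())
                if (!built.contains(e.getKey()) && built.containsAll(e.getValue()))
                    wave.add(e.getKey());
            if (wave.isEmpty())
                throw new IllegalStateException("dependency cycle");
            Collections.sort(wave);   // deterministic output for the example
            built.addAll(wave);
            waves.add(wave);
        }
        return waves;
    }

    public static void main(String[] args) {
        // Hypothetical module names, not our real project layout.
        Map<String, List<String>> deps = Map.of(
            "stdlib", List.of(),
            "ui",     List.of("stdlib"),
            "net",    List.of("stdlib"),
            "player", List.of("ui", "net"));
        System.out.println(waves(deps));
        // → [[stdlib], [net, ui], [player]]
    }
}
```

With 103 modules the middle waves would typically be wide, which is exactly where a cluster of office machines buys the most time.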

I am really looking forward to this project when I have some time to work on it. My goal is to develop a plug-in for TeamCity that we will use internally first. I expect to reduce the time for a full rebuild from about four minutes to something like 30 seconds, which would be a huge benefit. I also hope that we will be able to make the plug-in available to the public.
