JavaScript has seen an increase in popularity in recent years, both in the browser and on other platforms such as Node.js. However, the tools that help developers reason about JavaScript code remain fairly barebones compared with the tooling for static languages such as Java. Such tools are difficult to implement for JavaScript because of the dynamic nature of the language. Feldthaus et al. describe the difficulty of static analysis for JavaScript and identify efficient construction of call graphs as a key requirement for better development tools. JavaScript call graphs are computationally expensive to build, and existing solutions do not scale well enough to be useful in an interactive environment.

The authors present a scalable field-based flow analysis for constructing call graphs. This analysis, while unsound in principle, is said to produce highly accurate results in practice. Two static algorithms are proposed: a pessimistic and an optimistic variant. The pessimistic algorithm only reasons about interprocedural flow in the case of one-shot closures such as (function(x̄) { ... })(ē). The optimistic algorithm reasons about all interprocedural flow but may resolve callback invocations imprecisely. The accuracy of the static algorithms is measured by comparing the static call graphs with dynamic call graphs recorded by running an instrumented version of the program.

In this master research project these algorithms are replicated in Rascal, a metaprogramming language developed at CWI. The goal is to find out whether the static call graph algorithms, recreated from the specification in the original article, produce results as accurate as those of the original research. The algorithms are run on the same source programs as in the original research, although different versions are sometimes used because the versions used in the original research are unknown. Furthermore, two of the programs analyzed in the original research could not be analyzed completely; to compensate, two other programs have been analyzed. The static call graphs are compared with the dynamic call graphs to calculate the accuracy of the algorithms, as is done in the original research.

The call graphs created by the replicated algorithms were similarly accurate to those from the original paper when accuracy is measured by averaging over call sites. However, calculating accuracy this way, rather than comparing the entire sets of call graph edges, can obscure differences between the static and dynamic call graphs and inflate the reported precision. When the entire edge sets are compared instead, precision is in some cases considerably lower. Imprecision in the call graphs has the same causes as in the original research: most of it results from the fact that the algorithm merges all equally named properties into a single flow graph vertex. This choice speeds up the algorithm by reducing the size of the total flow graph, but it has a negative effect on accuracy; the effect appears smaller when call graph targets are compared per call site rather than as complete edges. The results of the call-site-averaged comparison thus support the claim made in the original paper that both algorithms produce accurate call graphs in practice, but when the call graph edges are compared instead, the numbers are often less favorable.
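To make the distinction between the two variants concrete, consider a minimal example (illustrative code, not taken from the analyzed programs) of a one-shot closure, the only form of interprocedural flow the pessimistic algorithm reasons about:

    // A one-shot closure: the function expression flows to exactly one
    // call site, so even the pessimistic algorithm resolves this call.
    (function (x) {
        console.log(x);
    })(42);

    // A callback creates general interprocedural flow: the function
    // literal flows into the parameter f of apply. The pessimistic
    // algorithm leaves the call f(10) unresolved; the optimistic
    // algorithm resolves it, though possibly imprecisely.
    function apply(f) { return f(10); }
    apply(function (x) { return x + 1; });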
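The property-name merging that causes most of the imprecision can likewise be illustrated with a small hypothetical example: because the field-based analysis keeps a single flow graph vertex per property name, unrelated functions stored under the same name become indistinguishable.

    // Two unrelated objects that both store a function under the
    // property name "run".
    var task = { run: function () { return "task"; } };
    var race = { run: function () { return "race"; } };

    // Because the analysis merges every property named "run" into one
    // flow graph vertex, both functions flow into this call site: the
    // static call graph reports two possible targets where the dynamic
    // call graph records only one.
    task.run();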
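The difference between the two ways of measuring accuracy can be sketched as follows. This is illustrative code only; the edge representation and function names are assumptions for the sketch, not the thesis implementation (which is written in Rascal).

    // Hypothetical edge representation: { site: "a.js:10", target: "f" }.
    function edgeKey(e) { return e.site + "->" + e.target; }

    // Edge-set precision: the fraction of static edges that also occur
    // in the dynamic call graph. One imprecisely resolved call site
    // contributes many wrong edges and drags the number down.
    function edgePrecision(staticEdges, dynamicEdges) {
        var dyn = new Set(dynamicEdges.map(edgeKey));
        var hits = staticEdges.filter(e => dyn.has(edgeKey(e)));
        return hits.length / staticEdges.length;
    }

    // Call-site-averaged precision: precision is computed per call site
    // and then averaged, so the same imprecise call site counts only
    // once, which can make the overall number look considerably better.
    function averagedPrecision(staticEdges, dynamicEdges) {
        var sites = new Set(staticEdges.map(e => e.site));
        var total = 0;
        sites.forEach(site => {
            var stat = staticEdges.filter(e => e.site === site);
            var dyn = new Set(dynamicEdges
                .filter(e => e.site === site)
                .map(e => e.target));
            total += stat.filter(e => dyn.has(e.target)).length / stat.length;
        });
        return total / sites.size;
    }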
Comparing the call graph edges with each other provides a more realistic view of how similar the static and dynamic call graphs are in their entirety. The original paper already notes in its future work section that the algorithm's accuracy could be improved by taking dynamic property accesses into account and by tracking objects other than functions. When call graph edges are compared, this need for improved accuracy becomes even more apparent.
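As a hypothetical illustration of such a dynamic property access: the property name is only computed at run time, so a flow graph keyed on syntactic property names has no vertex to connect this call site to.

    var handlers = {
        start: function () { return "started"; },
        stop:  function () { return "stopped"; }
    };

    // The property name is computed at run time; a field-based analysis
    // that only tracks syntactically known property names cannot tell
    // which handler this call site invokes.
    var action = Math.random() < 0.5 ? "start" : "stop";
    handlers[action]();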