In an Inductive Program Synthesis (IPS) problem, a set of input/output examples is given and the task is to find a program that produces the desired output for each given input. For instance, given the input [1, 2, 3] and the output [2, 4, 6], a program that multiplies each list element by two would be a correct solution. Recently, researchers from the University of Cambridge and Microsoft Research have submitted a paper on DeepCoder, a new approach to IPS, i.e., to the automatic synthesis of programs, to the 5th International Conference on Learning Representations (ICLR'17). This new technology has goals similar to our work on program synthesis, but achieves them by entirely different means.
Algorithm Synthesis with DeepCoder
The authors of "DeepCoder: Learning to Write Programs" employ deep learning with artificial neural networks (ANNs). First, a domain-specific programming language is chosen. Then, an ANN is trained on a variety of example programs. This ANN is then used to guide the search through the space of possible programs when a new programming task is to be solved. Basically, the search exhaustively explores a certain subspace of the space of all possible programs, say "all programs of at most 5 instructions." The search could then be a depth-first search (DFS) up to depth 5. Whenever the DFS needs to choose the next instruction to add to its current program, it can pick the one the ANN deems most likely to be the right choice. Since the search still explores the subspace exhaustively and the ANN merely provides the order of exploration, it will still find a correct program (if one exists in the subspace), even if the ANN's predictions were poor. With this concept, the authors achieve a considerable speedup over existing technologies.
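To make this concrete, here is a minimal sketch of such a guided DFS, assuming a hypothetical Model interface whose score() method rates how likely each instruction is to appear in the target program. The interfaces and names are my illustration, not the actual DeepCoder implementation:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class GuidedSearch {

  // One instruction of the domain-specific language, mapping state to state.
  interface Instruction { int[] apply(int[] state); }

  // Hypothetical trained network: rates how likely an instruction is to
  // appear in the program that explains the given input/output examples.
  interface Model { double score(Instruction ins); }

  // DFS over all programs of at most maxLen instructions. The ANN only
  // determines the ORDER in which instructions are tried, so the search
  // stays exhaustive: it finds a program if one exists in the subspace,
  // even if the model's predictions are poor.
  static List<Instruction> dfs(List<Instruction> program, int maxLen,
                               List<Instruction> vocabulary, Model model,
                               Predicate<List<Instruction>> solvesAllExamples) {
    if (solvesAllExamples.test(program)) return program; // all examples satisfied
    if (program.size() >= maxLen) return null;           // depth limit reached
    List<Instruction> ordered = new ArrayList<>(vocabulary);
    ordered.sort((a, b) -> Double.compare(model.score(b), model.score(a)));
    for (Instruction ins : ordered) {                     // most promising first
      program.add(ins);
      List<Instruction> found =
          dfs(program, maxLen, vocabulary, model, solvesAllExamples);
      if (found != null) return found;
      program.remove(program.size() - 1);                 // backtrack
    }
    return null; // no program of at most maxLen instructions solves the task
  }
}
```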
Program 8, an example task from the DeepCoder paper (both the program listing and the description are cut off in this excerpt):

```
s <- [int]
```

Input example: [1 2 4 5 7]

Description: Vivian loves rearranging things. Most of all, when she sees a row of heaps, she wants to make sure that each heap has more items than the one to its left. She is also obsessed with efficiency, so always moves the least possible number of items. Her dad really dislikes if she changes the size of heaps, so she only moves single items between them, making sure that the set of sizes of the heaps is the same as at the start; they are only in a different order. When you come in, you see heaps of sizes (of course, sizes strictly monotonically increasing)
Algorithm Synthesis with Genetic Programming
Researchers have been trying for a long time to find ways to automatically synthesize programs. Genetic Programming (GP) follows a different path than methods from traditional Artificial Intelligence or Machine Learning such as the one proposed in the above paper. Program synthesis is instead treated as an optimization problem over a variable-length representation, and we let an optimization algorithm (usually an Evolutionary Algorithm) solve it for us. I myself have worked on this topic for several years. GP and the approach in the above-mentioned new work have different advantages and disadvantages. For instance, GP cannot guarantee to find a program even if one exists, since it is a heuristic and may get stuck in a local optimum which does not solve the problem. GP may also be slower on small problems. Getting GP to work requires considerable effort, a deep understanding of the nature of optimization, and a good representation.
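As a minimal sketch of this view, assume programs are variable-length instruction lists and the fitness of a candidate counts the input/output examples it fails on. All names here are illustrative, not the setup of any particular paper:

```java
import java.util.Random;
import java.util.function.ToIntFunction;

public class TinyGP {
  static final Random RND = new Random();

  // A random program: a variable-length list of instruction opcodes.
  static int[] randomProgram(int maxLen, int numOps) {
    int[] p = new int[1 + RND.nextInt(maxLen)];
    for (int i = 0; i < p.length; i++) p[i] = RND.nextInt(numOps);
    return p;
  }

  // Point mutation: replace one randomly chosen instruction.
  static int[] mutate(int[] parent, int numOps) {
    int[] child = parent.clone();
    child[RND.nextInt(child.length)] = RND.nextInt(numOps);
    return child;
  }

  // fitness(p) = number of failed input/output examples; 0 = all solved.
  static int[] evolve(int popSize, int maxLen, int numOps, int generations,
                      ToIntFunction<int[]> fitness) {
    int[][] pop = new int[popSize][];
    for (int i = 0; i < popSize; i++) pop[i] = randomProgram(maxLen, numOps);
    for (int g = 0; g < generations; g++) {
      for (int i = 0; i < popSize; i++) {
        // binary tournament: the better of two random parents reproduces
        int[] a = pop[RND.nextInt(popSize)], b = pop[RND.nextInt(popSize)];
        int[] parent = fitness.applyAsInt(a) <= fitness.applyAsInt(b) ? a : b;
        int[] child = mutate(parent, numOps);
        if (fitness.applyAsInt(child) == 0) return child; // solution found
        pop[i] = child; // steady-state replacement
      }
    }
    return null; // heuristic: may fail even if a correct program exists
  }
}
```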
Then again, the programs synthesized by deep learning in the present paper were limited to 5 instructions and contained no sophisticated control flow. In our paper "Evolving Exact Integer Algorithms with Genetic Programming" from 2014, we synthesized exact integer algorithms which explicitly use variables and contain nested loops and if-then-else decisions. We synthesized programs that could compute the greatest common divisor of two numbers, the square root (with integer arithmetic only), or the least significant bit of a number (again only with arithmetic).
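To give an impression of the target behavior, here is a hand-written greatest common divisor in the pseudo-Java style used below. This is the textbook Euclidean algorithm, not one of the evolved programs; it merely illustrates the kind of loop-and-branch structure the synthesized exact integer algorithms have to express:

```java
// Textbook Euclidean algorithm: repeatedly replace (m0, m1) by (m1, m0 mod m1)
// until the remainder vanishes; the last non-zero value is the gcd.
static int gcd(int m0, int m1) {
  while (m1 != 0) {
    int t = m0 % m1; // remainder of the division
    m0 = m1;
    m1 = t;
  }
  return m0; // gcd(m0, 0) = m0
}
```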
Raw program (not in the paper) produced by setup CL-SM for problem srb, in ECJ tree syntax:

```
(/ (* 1 (+ m2 1))
   (- (CL-SM (/ (= m1 m0) (/ m2 m2))
             (; (= m2 (- (+ 1 m2) (/ m2 (/ (= m1 m0) (+ 0 (+ 1 m2))))))
                (= m2 (- (+ 1 m2) (/ m2 (/ (= m1 m0) (+ 0 (+ 1 m2))))))))
      (= m2 (- (+ (/ m2 m2) m2) (/ m2 (/ (= m1 m0) (+ 0 (/ m1 m2))))))))
```

Corresponding pseudo-C or pseudo-Java code (only the signature survives in this excerpt):

```
static int srb(int m0) {
```
Description: In the square root task, the goal is to find a program that computes the square root of a number. For the original task, there were 100 training cases and the inputs were all square numbers. In the more complicated task srb illustrated here, the numbers 0…99 were used as training cases. This is harder: most of these inputs have no exact integer square root, so GP, with only integer arithmetic available, had to synthesize a program that computes the truncated square root (e.g., input 8 should lead to output 2, while input 9 leads to output 3).
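For comparison, here is a hand-written reference implementation of the srb behavior in the same pseudo-Java style, assuming the usual definition of the truncated square root. This is my illustration, not a program from the paper:

```java
// Find the largest r with r * r <= m0, using integer arithmetic only.
// E.g., srbReference(8) == 2 and srbReference(9) == 3.
static int srbReference(int m0) {
  int r = 0;
  while ((r + 1) * (r + 1) <= m0) { // grow r while the next square still fits
    r++;
  }
  return r;
}
```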
Summary and Outlook
In terms of the complexity of what we can generate, I think GP is still ahead. However, this need not remain so, especially considering that GP builds on 30 years of research while the paper by Balog et al. proposes a very new concept. Moreover, there is no reason whatsoever not to combine both technologies. I think it would be possible to plug the ANNs not just into an exhaustive program search but also into the search operators of GP. This could not just improve the success rate of GP significantly, it could also speed up the process. On the other hand, GP may also be used as an operator inside the program search guided by the ANN. Nothing prevents us from collecting some candidate programs during the search and, from time to time, using them as the starting population of a GP process. If GP succeeds, good; if not, the original search can be continued. This could help to tackle even larger search spaces.
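To illustrate the first idea, a GP mutation operator could sample the replacement instruction from the probabilities the ANN predicts for the given input/output examples, instead of drawing it uniformly. This is purely a sketch of one possible hybridization; both the per-instruction probabilities and their use here are my assumptions:

```java
import java.util.Random;

// Guided point mutation: replace one instruction of the parent, drawing the
// new opcode via roulette-wheel sampling from the ANN-predicted probability
// of each instruction (annProbabilities must sum to 1).
static int[] guidedMutate(int[] parent, double[] annProbabilities, Random rnd) {
  int[] child = parent.clone();
  double u = rnd.nextDouble(), cumulative = 0.0;
  int chosen = annProbabilities.length - 1; // fallback for rounding errors
  for (int i = 0; i < annProbabilities.length; i++) {
    cumulative += annProbabilities[i];
    if (u < cumulative) { chosen = i; break; }
  }
  child[rnd.nextInt(child.length)] = chosen; // overwrite a random position
  return child;
}
```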
Finally, both concepts are still many years away from being able to compete with actual programmers. First of all, we need a very restrictive domain-specific language; this is not going to work with Java, C, and the like anytime soon. Second, the programs we get work correctly on the provided examples, at best. Neither GP nor deep learning can provide a proof that the programs are actually correct, and both technologies are likely to exploit corner cases and even the tiniest incompleteness of the example data. Different from human programmers, they have no understanding of the actual problems and intentions. This may (and usually will) result in programs which are very hard to read for a human, i.e., which are hard to validate without automatic tools. Finally, both technologies are still limited to relatively small tasks. Although I think that our 2014 paper pushes these limits quite a bit, we are still far from synthesizing real-world programs.
The following papers may make for interesting further reading:
- Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, and Daniel Tarlow. DeepCoder: Learning to Write Programs. Submitted to the 5th International Conference on Learning Representations (ICLR'17). arXiv:1611.01989
- Thomas Weise, Mingxu Wan, Ke Tang, and Xin Yao. Evolving Exact Integer Algorithms with Genetic Programming. In Proceedings of the IEEE Congress on Evolutionary Computation (CEC'14), part of the 2014 World Congress on Computational Intelligence (WCCI'14), pages 1816-1823, Beijing, China: Beijing International Convention Center (BICC), July 6-11, 2014. Los Alamitos, CA, USA: IEEE Computer Society Press. ISBN: 978-1-4799-1488-3. doi:10.1109/CEC.2014.6900292 / pdf / slides
- Thomas Weise and Ke Tang. Evolving Distributed Algorithms with Genetic Programming. IEEE Transactions on Evolutionary Computation (IEEE-EC), 16(2):242-265, April 2012. Received a CIS Publication Spotlight in the August 2012 issue of the IEEE Computational Intelligence Magazine (CIM). doi:10.1109/TEVC.2011.2112666 / pdf