Title: JuMP 1.0: recent improvements to a modeling language for mathematical optimization
JuMP is an algebraic modeling language embedded in the Julia programming language. JuMP allows users to model optimization problems of a variety of kinds, including linear programming, integer programming, conic optimization, semidefinite programming, and nonlinear programming, and it handles the low-level details of communicating with solvers. After nearly 10 years in development, JuMP 1.0 was released in March 2022. In this short communication, we highlight the improvements to JuMP from recent releases up to and including 1.0.
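To make the modeling style concrete, here is a minimal sketch of a linear program in JuMP 1.0 syntax. The choice of the open-source HiGHS solver is illustrative only; any LP-capable solver with a JuMP interface could be substituted.

```julia
# Minimal sketch of a linear program in JuMP 1.0 syntax.
# HiGHS is one open-source solver with a JuMP interface;
# it is an illustrative choice, not one prescribed by the paper.
using JuMP
import HiGHS

model = Model(HiGHS.Optimizer)
@variable(model, x >= 0)
@variable(model, y >= 0)
@constraint(model, x + 2y <= 3)
@objective(model, Max, 5x + 4y)

optimize!(model)   # JuMP handles the low-level communication with the solver
println(value(x), " ", value(y))
```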
Award ID(s): 1835443
PAR ID: 10490266
Author(s) / Creator(s):
Publisher / Repository: Springer
Date Published:
Journal Name: Mathematical Programming Computation
Volume: 15
Issue: 3
ISSN: 1867-2949
Page Range / eLocation ID: 581 to 589
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. We introduce MathOptInterface, an abstract data structure for representing mathematical optimization problems based on combining predefined functions and sets. MathOptInterface is significantly more general than existing data structures in the literature, encompassing, for example, a spectrum of problem classes from integer programming with indicator constraints to bilinear semidefinite programming. We also outline an automated rewriting system between equivalent formulations of a constraint. MathOptInterface has been implemented in practice, forming the foundation of a recent rewrite of JuMP, an open-source algebraic modeling language in the Julia language. The regularity of the MathOptInterface representation leads naturally to a general file format for mathematical optimization that we call MathOptFormat. In addition, the automated rewriting system provides modeling power to users while making it easy to connect new solvers to JuMP. Summary of Contribution: This paper describes a new abstract data structure for representing mathematical optimization models with a corresponding file format and automatic transformation system. The advances are useful for algebraic modeling languages, allowing practitioners to model problems more naturally and more generally than before.
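As an illustration of the function-in-set idea, the sketch below (assuming the current MathOptInterface Julia API) states the scalar constraint 2x <= 1 as a ScalarAffineFunction belonging to the set LessThan(1.0):

```julia
# Hedged sketch of MathOptInterface's function-in-set representation,
# using MOI's generic in-memory model.
import MathOptInterface as MOI

model = MOI.Utilities.Model{Float64}()
x = MOI.add_variable(model)

# The constraint 2x <= 1 is stored as a (function, set) pair:
# f(x) = 2x + 0  belonging to  LessThan(1.0).
f = MOI.ScalarAffineFunction([MOI.ScalarAffineTerm(2.0, x)], 0.0)
c = MOI.add_constraint(model, f, MOI.LessThan(1.0))
```

Because every constraint is such a pair, the automated rewriting system can bridge a (function, set) combination that a solver does not support into equivalent combinations that it does.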
  2. The programming language Julia is designed to solve the 'two-language problem', allowing developers who write scientific software to achieve the desired performance without sacrificing productivity. Since its inception in 2012, developers who have been using other programming languages have transitioned to Julia. A systematic investigation of the questions developers ask about Julia can help in understanding the challenges they face while using it. Such understanding can be helpful (i) for toolsmiths, who can construct tools that let developers get the most out of Julia, and (ii) for Julia language maintainers, by providing empirical evidence on areas in which to improve the language and the Julia ecosystem. We conduct an empirical study with 3,093 Stack Overflow posts in which we identify 13 categories of questions related to Julia-based software development. We observe that developers ask about a diverse set of topics, such as GC (Julia's garbage collector), JuMP (a domain-specific language built with Julia), and symbols (a metaprogramming utility in Julia). Based on our emerging results, we recommend enhancing support for developers with Julia-based tools and techniques for cross-language transfer, type-related assistance, and package resolution.
  3. Emerging research in computer graphics, inverse problems, and machine learning requires us to differentiate and optimize parametric discontinuities. These discontinuities appear in object boundaries, occlusion, contact, and sudden changes over time. In many domains, such as rendering and physics simulation, we differentiate the parameters of models that are expressed as integrals over discontinuous functions. Ignoring the discontinuities during differentiation often has a significant impact on the optimization process. Previous approaches either apply specialized hand-derived solutions, smooth out the discontinuities, or rely on incorrect automatic differentiation. We propose a systematic approach to differentiating integrals with discontinuous integrands by developing a new differentiable programming language. We introduce integration as a language primitive and account for the Dirac delta contribution that arises from differentiating parametric discontinuities in the integrand. We formally define the language semantics and prove its correctness and closure under differentiation, allowing the generation of gradients and higher-order derivatives. We also build a system, Teg, implementing these semantics. Our approach is widely applicable to a variety of tasks, including image stylization, fitting shader parameters, trajectory optimization, and optimizing physical designs.
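A standard one-dimensional example (illustrative, not quoted from the paper) shows why the Dirac delta term matters. For F(p) = the integral over [0,1] of the step function 1[x < p], we have F(p) = p on (0,1), so F'(p) = 1; yet differentiating the piecewise-constant integrand pointwise gives zero almost everywhere. The correct derivative comes from the delta produced by the moving discontinuity:

```latex
% Differentiating through a parametric discontinuity:
% the step 1[x < p] contributes a Dirac delta at x = p.
\frac{d}{dp} \int_0^1 \mathbf{1}[x < p] \, dx
  = \int_0^1 \delta(p - x) \, dx
  = 1, \qquad p \in (0,1),
% whereas naively moving d/dp inside the integral yields 0.
```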
  4. We present DiffTaichi, a new differentiable programming language tailored for building high-performance differentiable physical simulators. Based on an imperative programming language, DiffTaichi generates gradients of simulation steps using source code transformations that preserve arithmetic intensity and parallelism. A lightweight tape records the whole simulation program structure and replays the gradient kernels in reverse order for end-to-end backpropagation. We demonstrate the performance and productivity of our language in gradient-based learning and optimization tasks on 10 different physical simulators. For example, a differentiable elastic object simulator written in our language is 4.2x shorter than the hand-engineered CUDA version yet runs as fast, and is 188x faster than the TensorFlow implementation. Using our differentiable programs, neural network controllers are typically optimized within only tens of iterations.
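The tape-and-replay mechanism can be sketched generically. The toy below is written in Julia for consistency with the rest of this page; it illustrates only the idea of recording adjoints and replaying them in reverse, not DiffTaichi's actual Python-embedded API, whose tape records GPU kernel launches.

```julia
# Generic sketch of tape-based reverse-mode differentiation: record each
# forward step together with its adjoint, then replay adjoints in reverse.
struct TapeEntry
    backward::Function   # adjoint of one recorded simulation step
end

tape = TapeEntry[]

# Run a step forward and record its adjoint on the tape.
function record!(tape, fwd, bwd, x)
    push!(tape, TapeEntry(bwd))
    return fwd(x)
end

# Toy two-step "simulation": z = (2x)^2
x = 3.0
y = record!(tape, v -> 2v, g -> 2g, x)        # dy/dx = 2
z = record!(tape, v -> v^2, g -> g * 2y, y)   # dz/dy = 2y

# Reverse sweep: seed dz/dz = 1, then replay the tape backwards.
function backprop(tape)
    grad = 1.0
    for entry in reverse(tape)
        grad = entry.backward(grad)
    end
    return grad
end

backprop(tape)  # dz/dx = 2 * 2y = 24.0
```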
  5. Transformer-based large language models are making significant strides in various fields, such as natural language processing [1-5], biology [6,7], chemistry [8-10] and computer programming [11,12]. Here, we show the development and capabilities of Coscientist, an artificial intelligence system driven by GPT-4 that autonomously designs, plans and performs complex experiments by incorporating large language models empowered by tools such as internet and documentation search, code execution and experimental automation. Coscientist showcases its potential for accelerating research across six diverse tasks, including the successful reaction optimization of palladium-catalysed cross-couplings, while exhibiting advanced capabilities for (semi-)autonomous experimental design and execution. Our findings demonstrate the versatility, efficacy and explainability of artificial intelligence systems like Coscientist in advancing research.