Forking and pull requests have been widely used in open-source communities as a uniform development and contribution mechanism, giving developers the flexibility to modify their own fork without affecting others before attempting to contribute back. However, not all projects use forks efficiently; many experience lost and duplicate contributions and fragmented communities. In this paper, we explore how open-source projects on GitHub differ with regard to forking inefficiencies. First, we observed that different communities experience these inefficiencies to widely different degrees and interviewed practitioners to understand why. Then, using multiple regression modeling, we analyzed which context factors correlate with fewer inefficiencies. We found that better modularity and centralized management are associated with more contributions and a higher fraction of accepted pull requests, suggesting specific best practices that project maintainers can adopt to reduce forking-related inefficiencies in their communities.
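As a rough, hedged illustration of the kind of multiple regression analysis mentioned above (a sketch under assumed data, not the authors' actual model or variables), the following Python snippet regresses a hypothetical per-project pull-request acceptance fraction on invented context factors such as modularity and centralized management:

    # Minimal sketch: ordinary least squares over project-level context factors.
    # All column names and values are invented for illustration only.
    import pandas as pd
    import statsmodels.formula.api as smf

    projects = pd.DataFrame({
        "accepted_pr_fraction": [0.62, 0.40, 0.75, 0.55, 0.81, 0.33],
        "modularity":           [0.70, 0.30, 0.80, 0.50, 0.90, 0.20],
        "centralized_mgmt":     [1, 0, 1, 0, 1, 0],   # 1 = centrally managed
        "project_age_years":    [6, 2, 9, 4, 11, 1],
    })

    model = smf.ols(
        "accepted_pr_fraction ~ modularity + centralized_mgmt + project_age_years",
        data=projects,
    ).fit()
    print(model.summary())  # coefficient signs indicate direction of association

In such a model, positive coefficients on modularity and centralized management would correspond to the association with fewer forking inefficiencies reported in the abstract.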
Forking and dividing in fields with several orderings and valuations
We consider existentially closed fields with several orderings, valuations, and p-valuations. We show that these structures are NTP2 of finite burden, but usually have the independence property. Moreover, forking agrees with dividing, and forking can be characterized in terms of forking in ACVF, RCF, and pCF.
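For readers outside model theory, the standard textbook definitions behind "forking agrees with dividing" are roughly the following; these are general definitions, not specific to this paper:

    % Textbook definitions (Tent–Ziegler style), stated for a formula phi(x,a) and a set A:
    \varphi(x,a)\ \text{divides over}\ A \iff \text{there is an $A$-indiscernible sequence } (a_i)_{i<\omega}
        \text{ with } a_0 = a \text{ such that } \{\varphi(x,a_i) : i<\omega\} \text{ is inconsistent};
    \varphi(x,a)\ \text{forks over}\ A \iff \varphi(x,a) \vdash \bigvee_{j<n}\psi_j(x,b_j)
        \text{ with each } \psi_j(x,b_j) \text{ dividing over } A.

A type forks (divides) over A if it implies such a disjunction (a dividing formula); "forking agrees with dividing" asserts that the disjunction adds nothing, i.e., every forking formula already divides.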
- Award ID(s):
- 1803120
- PAR ID:
- 10313373
- Date Published:
- Journal Name:
- Journal of Mathematical Logic
- ISSN:
- 0219-0613
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
The binary-forking model is a parallel computation model, formally defined by Blelloch et al., in which a thread can fork a concurrent child thread, recursively and asynchronously. The model incurs a cost of Θ(log n) to spawn or synchronize n tasks or threads. The binary-forking model realistically captures the performance of parallel algorithms implemented using modern multithreaded programming languages on multicore shared-memory machines. In contrast, the widely studied theoretical PRAM model does not consider the cost of spawning and synchronizing threads, and as a result, algorithms achieving optimal performance bounds in the PRAM model may not be optimal in the binary-forking model. Often, algorithms need to be redesigned to achieve optimal performance bounds in the binary-forking model, and the non-constant synchronization cost makes the task challenging. In this paper, we show that in the binary-forking model we can achieve optimal or near-optimal span with negligible or no asymptotic blowup in work for comparison-based sorting, Strassen's matrix multiplication (MM), and the Fast Fourier Transform (FFT). Our major results are as follows: (1) A randomized comparison-based sorting algorithm with optimal O(log n) span and O(n log n) work, both w.h.p. in n. (2) An optimal O(log n) span algorithm for Strassen's MM with only a log log n-factor blow-up in work, as well as a near-optimal O(log n log log log n) span algorithm with no asymptotic blow-up in work. (3) A near-optimal O(log n log log log n) span FFT algorithm with less than a log n-factor blow-up in work for all practical values of n (i.e., n ≤ 10^10,000). (A minimal fork-join code sketch of the binary-forking primitive appears after this list.)
-
In this paper, it is shown that if p is a complete type (over a parameter set A) of Lascar rank at least 2 in the theory of differentially closed fields of characteristic zero, then there exists a pair of realisations a, b such that p has a nonalgebraic forking extension over a, b. Moreover, if A is contained in the field of constants, then p already has a nonalgebraic forking extension over a. The results are also formulated in a more general setting.
-
Censor-Hillel, Keren; Grandoni, Fabrizio; Ouaknine, Joel; Puppis, Gabriele (Ed.)
For a set M of m elements, we define a decreasing chain of classes of normalized monotone-increasing valuation functions from 2^M to ℝ_{≥ 0}, parameterized by an integer q ∈ [2, m]. For a given q, we refer to the class as q-partitioning. A valuation function is subadditive if and only if it is 2-partitioning, and fractionally subadditive if and only if it is m-partitioning. Thus, our chain establishes an interpolation between subadditive and fractionally subadditive valuations. We show that this interpolation is smooth (q-partitioning valuations are "nearly" (q-1)-partitioning in a precise sense, Theorem 6), interpretable (the definition arises by analyzing the core of a cost-sharing game, à la the Bondareva-Shapley Theorem for fractionally subadditive valuations, Section 3.1), and non-trivial (the class of q-partitioning valuations is distinct for all q, Proposition 3).
For domains where provable separations exist between subadditive and fractionally subadditive, we interpolate the stronger guarantees achievable for fractionally subadditive valuations to all q ∈ {2, …, m}. Two highlights are the following:
1) An Ω((log log q)/(log log m))-competitive posted price mechanism for q-partitioning valuations. Note that this matches asymptotically the state of the art for both subadditive (q = 2) [Paul Dütting et al., 2020] and fractionally subadditive (q = m) [Feldman et al., 2015].
2) Two upper-tail concentration inequalities on 1-Lipschitz, q-partitioning valuations over independent items. One extends the state of the art for q = m to q < m; the other improves the state of the art for q = 2 for q > 2. Our concentration inequalities imply several corollaries that interpolate between subadditive and fractionally subadditive, for example: 𝔼[v(S)] ≤ (1 + 1/log q)Median[v(S)] + O(log q). To prove this, we develop a new isoperimetric inequality using Talagrand's method of control by q points, which may be of independent interest.
We also discuss other probabilistic inequalities and game-theoretic applications of q-partitioning valuations, and connections to subadditive MPH-k valuations [Tomer Ezra et al., 2019]. (The standard definitions of the subadditive and fractionally subadditive endpoints are recalled after this list.)
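To make the binary-forking primitive in the Blelloch et al. abstract above concrete, here is a minimal, hedged sketch (not code from that paper): each call forks one concurrent child thread and joins it after recursing itself, so spawning and then synchronizing the n leaf tasks forms a binary tree of Θ(log n) fork/join depth. Names and thresholds are illustrative, and CPython's global interpreter lock means this shows the structure of the model rather than real parallel speedup.

    # Minimal binary fork-join sketch (illustrative only, not the paper's code).
    import threading

    def parallel_sum(a, lo, hi, out, idx):
        """Sum a[lo:hi] into out[idx] via recursive binary forking."""
        if hi - lo <= 2048:                 # small ranges: compute sequentially
            out[idx] = sum(a[lo:hi])
            return
        mid = (lo + hi) // 2
        left, right = [0], [0]
        # Fork: a child thread handles the left half concurrently ...
        child = threading.Thread(target=parallel_sum, args=(a, lo, mid, left, 0))
        child.start()
        # ... while the current thread recurses on the right half.
        parallel_sum(a, mid, hi, right, 0)
        child.join()                        # synchronize with the forked child
        out[idx] = left[0] + right[0]

    if __name__ == "__main__":
        data = list(range(100_000))
        result = [0]
        parallel_sum(data, 0, len(data), result, 0)
        print(result[0] == sum(data))       # True

The fork/join tree here is exactly the structure the model charges for: reaching all n leaves requires Θ(log n) levels of forks, and the matching joins cost the same on the way back up.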
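For reference, the two endpoints of the q-partitioning interpolation described above are the standard notions below (textbook definitions of the endpoint classes, not the paper's q-partitioning definition itself), stated for a normalized monotone valuation v : 2^M → ℝ_{≥ 0}:

    \text{subadditive:}\quad v(S \cup T) \le v(S) + v(T) \quad \text{for all } S, T \subseteq M;
    \text{fractionally subadditive (XOS):}\quad v(S) \le \sum_{T \subseteq M} \alpha_T\, v(T)
        \text{ whenever } \alpha_T \ge 0 \text{ and } \sum_{T \ni j} \alpha_T \ge 1 \text{ for every } j \in S.

Equivalently, fractionally subadditive valuations are pointwise maxima of nonnegative additive functions; the paper's chain of q-partitioning classes interpolates between these two extremes as q ranges over [2, m].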