paper-lvars-lattice-based-structures
https://www.cs.indiana.edu/~lkuper/papers/lvars-fhpc13.pdf
It appears that what they are driving at is that anything can read from the shared state, but writes may only add information: you can append to the state, never overwrite or remove anything.
paper-lvars-lattice-based-structures#data-structures-grow-monotonically "Specifically, we are concerned with models where shared data structures grow monotonically—by publishing information, but never invalidating it."
paper-lvars-lattice-based-structures#monotonic-supports-diverse-parallel-programming-models "Because state modifications that only add information and never destroy it can be structured to commute with one another and thereby avoid race conditions, it stands to reason that diverse deterministic parallel programming models would leverage the principle of monotonicity."
When talking about the Haskell program on page 3, the paper seems to say that connected-component discovery is parallel, but that before proceeding to the next level (where connected-component discovery can happen in parallel again) it has to wait for all members of the output set to become available. In other words, this inhibits pipelining.
"Even though connected-component discovery is parallel, members of the output set do not become available to other computations until component discovery is finished, limiting parallelism"
I don't understand this problem. TODO: "separating producers and consumers when expressing producer-consumer computations using pure programming with futures"
Page 4
"(it is possible for) two subcomputations to independently update an LVar, and then deterministically merge the results by taking the lub of the resulting two states"
References
quora-post-what-does-set-theory-equation-mean
Questions
paper-lvars-lattice-based-structures#commute
What does "commute with one another" mean in:
"In this paper, we are concerned with an alternative approach:
allowing data to be shared, but limiting the operations that can be
performed on it to only those operations that commute with one
another and thus can tolerate nondeterministic thread interleavings."
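One way to read "commute with one another": applying the operations in either order yields the same final state, so a nondeterministic interleaving cannot change the outcome. A small Python sketch of my own (not from the paper):

```python
# Two updates commute if applying them in either order yields the same
# final state; interleaving order then cannot affect the result.
def add(s, x):
    # Monotonic set insertion: only adds information.
    return s | {x}

s0 = frozenset()
order_a = add(add(s0, 1), 2)  # thread 1's update first, then thread 2's
order_b = add(add(s0, 2), 1)  # thread 2's update first, then thread 1's
assert order_a == order_b     # insertion commutes: both give {1, 2}

# By contrast, a destructive overwrite does not commute:
def overwrite(s, x):
    return frozenset({x})

assert overwrite(overwrite(s0, 1), 2) != overwrite(overwrite(s0, 2), 1)
```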
- arithmetic right shift (bit manipulation)
- fills the vacated high bits with copies of the sign bit
- cache locality
- channel histories
- collision-chain
- In a hash table, the chain of elements stored in a single bucket, which must be iterated through on lookup.
- hash-table
- data-flow languages
- TODO (what are these?)
- Like StreamIt
- deterministic parallelism
- deterministic-by-construction
- disjoint
- term-downheaping
- Moving the root node down (repeatedly swapping it with a child that violates the heap property) until a term-semi-heap has been restored to a heap.
- free store
- like the heap of dynamic memory allocation
- futures
- grow-factor
- How much to grow dynamic data structures by when they get close to their limits
- inflationary
- TODO - "An LVar can take on any sequence of states from D, so long as that sequence respects the partial order—that is, updates to the LVar (made via the put operation) are inflationary with respect to ⊑" (from paper-lvars-lattice-based-structures page 4)
- IVars
- lattice
- TODO
- Virtually any data structure to which information is added gradually can be represented as a lattice
- level-synchronized breadth-first traversal
- popular strategy for parallel breadth-first graph traversal
- load-factor
- In a hash table, formally defined as α = n / N, i.e. the ratio of stored entries (n) to buckets (N)
- logical right shift (bit manipulation)
- fills the vacated high bits with zeros
- lub - least upper bound
- LVars
- a new model for deterministic-by-construction parallel programming that generalizes existing single-assignment models (like IVars) to allow multiple assignments that are monotonically increasing with respect to a user-specified lattice.
- named with an L because the states an LVar can take on are elements of a user-specified lattice.
- Unlike IVars, an LVar may have an arbitrary number of states forming a set D, which is partially ordered by a user-specified relation ⊑ (TODO: understand this better)
- monotonic data structures
- information is only added and never removed
- common theme of both data-flow and single-assignment models
- term-page-fault
- partially ordered set
- pipeline parallelism
- term-semi-heap
- a heap whose root node has been replaced. Can be restored to a heap by term-downheaping.
- re-hashing
- when you change the number of buckets in a hash table and have to re-distribute the values.
- it is expensive, so consider making the #term-grow-factor large so you don't have to re-hash frequently.
- single-assignment languages
- single-assignment memory cells
- shared data stores that increase monotonically in IVars-based systems (like CnC and monad-par)
- state space
- structure padding
- tagged-union
- A value of an enum type contains information about which variant it is, in addition to any data associated with that variant. This is sometimes referred to as a 'tagged union', since the data includes a 'tag' indicating what type it is.
- tame sharing
- "threshold" reads
- tripwire
- working sets
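To see the difference between the two right-shift entries above, a Python sketch of my own (8-bit width and the values are made-up example numbers; Python's built-in `>>` on negative ints is already arithmetic, so the logical variant is simulated by masking):

```python
def arithmetic_shr(x, n, bits=8):
    # Fill vacated high bits with copies of the sign bit.
    if x & (1 << (bits - 1)):            # sign bit set: negative in two's complement
        x -= 1 << bits                   # reinterpret as a signed value
    return (x >> n) & ((1 << bits) - 1)  # Python's >> on negatives is arithmetic

def logical_shr(x, n, bits=8):
    # Fill vacated high bits with zeros.
    return (x & ((1 << bits) - 1)) >> n

v = 0b1000_0000  # -128 as a signed 8-bit value
print(format(arithmetic_shr(v, 2), "08b"))  # 11100000  (sign bit copied in)
print(format(logical_shr(v, 2), "08b"))     # 00100000  (zeros shifted in)
```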
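The hash-table entries above (collision-chain, load-factor, grow-factor, re-hashing) fit together; a Python sketch of my own (the grow factor of 2 and the 0.75 threshold are conventional choices, not anything prescribed by these notes):

```python
class HashTable:
    def __init__(self, n_buckets=4, max_load=0.75, grow_factor=2):
        self.buckets = [[] for _ in range(n_buckets)]  # each bucket is a collision chain
        self.n = 0
        self.max_load = max_load
        self.grow_factor = grow_factor

    def load_factor(self):
        return self.n / len(self.buckets)  # alpha = n / N

    def insert(self, key, value):
        if self.load_factor() >= self.max_load:
            self._rehash()
        chain = self.buckets[hash(key) % len(self.buckets)]
        for i, (k, _) in enumerate(chain):  # walk the collision chain
            if k == key:
                chain[i] = (key, value)
                return
        chain.append((key, value))
        self.n += 1

    def _rehash(self):
        # Expensive: every entry must be redistributed into the new buckets,
        # which is why a large grow factor means rehashing less often.
        old = self.buckets
        self.buckets = [[] for _ in range(len(old) * self.grow_factor)]
        for chain in old:
            for key, value in chain:
                self.buckets[hash(key) % len(self.buckets)].append((key, value))

    def get(self, key):
        for k, v in self.buckets[hash(key) % len(self.buckets)]:
            if k == key:
                return v
        raise KeyError(key)
```

Starting from 4 buckets, the table grows to 8 on the 4th insert (load factor hits 0.75) and to 16 on the 7th, keeping chains short at the cost of periodic re-hashing.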
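The term-semi-heap / term-downheaping entries above, sketched as a binary max-heap in the standard array layout (my own illustration with made-up numbers):

```python
def downheap(heap, i=0):
    # Move the node at index i down, swapping with its larger child,
    # until the max-heap property is restored.
    n = len(heap)
    while True:
        left, right = 2 * i + 1, 2 * i + 2
        largest = i
        if left < n and heap[left] > heap[largest]:
            largest = left
        if right < n and heap[right] > heap[largest]:
            largest = right
        if largest == i:
            return
        heap[i], heap[largest] = heap[largest], heap[i]
        i = largest

# A max-heap whose root has been replaced by 1 (a "semi-heap"):
h = [1, 8, 7, 4, 5, 6]
downheap(h)
print(h)  # [8, 5, 7, 4, 1, 6] - the root sank until the heap property held
```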
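The tagged-union entry above, sketched in Python with dataclasses (my own illustration; the quoted definition is about Rust-style enums, and here the class of each value plays the role of the tag):

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Circle:
    radius: float

@dataclass
class Rect:
    w: float
    h: float

Shape = Union[Circle, Rect]  # the "union"; isinstance checks read the "tag"

def area(s: Shape) -> float:
    # Dispatch on the variant's tag to reach the data for that variant.
    if isinstance(s, Circle):
        return 3.14159 * s.radius ** 2
    return s.w * s.h

print(area(Rect(3, 4)))  # 12
```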
Referring Pages
musings-on-ui
talk-silence-is-golden
data-architecture-glossary
monotonic-data-growth