Problem Set 1: Specific Graphics
The deadlines for this problem set are shown on the course calendar.
The purpose of this problem set is to give you practice with test-first programming and specifications. Given a set of specifications, you will write unit tests that check for compliance with the specifications, and then implement code that meets the specifications. You will also strengthen the specs of some functions, write your own specs for other functions, and test and implement those specs as well.
More progress with Git
Read and do the exercises in Git 2: Disaster Recovery.
Get the code
- Ask Didit to create a remote `psets/ps1` repository for you on github.mit.edu.
- Clone the repo. Find the `git clone` command at the top of your Didit assignment page, copy it entirely, paste it into your terminal, and run it.
- Run `npm install`, then open the project in Visual Studio Code.

See Problem Set 0 if you need a refresher on how to create, clone, or set up your repository.
Overview
Before we dive in: don’t get stuck on any of the new concepts or mathematical details as you read this overview. You’ll come back to them as you work through the problem set.
In this problem set, you will specify, test, and implement functions for working with two kinds of graphics data. First, colors, including:
- In RGB color, this green is two-thirds of the way from sky blue to bright amber.
- A gradient from light teal to dark indigo with four intermediate RGB color steps.
- Starting from deep pink, we find three more colors evenly spaced around the color wheel using HSL color.
And then Bézier curves (a kind of parametric curve), including:
To begin the problem set, you will start with simple linear interpolation in 1 dimension, in 3-dimensional color space, and in 2-dimensional Euclidean space. Along the way, you will also add the idea of an easing function that controls the selection of where to interpolate. For example:
In the animation with easing above, we’re interpolating points on the curve outside the range t ∈ [0, 1]. We won’t do that for Bézier curves, but it works in other contexts, such as this very simple curve.
By the end of the problem set, you will build up a set of functions that you will use to replicate the animations below… and perhaps create new animations of your own!
Show how alley-oop is constructedIn this looping one-second animation… The circle shape is two Bézier curves with control points: It is translated by the curve with control points: Its RGB color transitions linearly from (128,128,128) to (173,255,47). |
Show how rainbow connection is constructedIn this looping two-second animation… The heart shape is two Bézier curves with control points: Each curve is revealed over time according to an easing function that is the 1-dimensional Bézier curve with control points: And the color of each curve is a rainbow gradient around the color wheel that starts at RGB (39,229,220). |
Steps
Since we are doing iterative test-first programming, your workflow for each function should follow the order specs → tests → implementation, iterating as needed. Make steady progress across functions and down the workflow, in the following pattern:
Read carefully the provided specs for functions you must implement. The specs use the type aliases `Color` and `Point` to give names to some data structures we don’t want to re-spell every place they are used.

- `lerp(..)` performs linear interpolation between two numbers (“lerp” is the conventional name for a function that does linear interpolation)
- `lerpColor(..)` performs linear interpolation between RGB colors
- `makeGradient(..)` creates gradients between colors, represented as Maps that contain a Color for each step in the gradient
- `makePalette(..)` generates palettes of colors based on a starting color by changing its hue
- `interpolate(..)` performs linear interpolation with an easing function
- `bezierPath(..)` finds points along a Bézier curve, using `bezierInterpolate(..)` as a helper function
Problems 2(a–c) require the stronger vs. weaker specs material from Reading 5. If you have not yet done Reading 5, you can continue working by skipping ahead to 2(d) (writing tests for `lerpColor`), 3(a) (tests for `makeGradient`), and 4(a) (tests for `makePalette`), then circle back before implementing those functions.

Start with specs, tests, and code for `lerpWeak`…

- In `src/lerp.ts`, write a weak spec for `lerpWeak(..)` that makes it understandable, appropriate for implementing `lerpColor(..)`, and as weak as possible. Useful for `lerpColor`, but otherwise unnecessarily weak!
- Test your spec for `lerpWeak` in the first section of `test/lerp.test.ts`. This is a very simple function: your test suite should be very small. Add/commit/push.
- Implement `lerpWeak`. Review your work: the spec should be uncomfortably weak, but the quality of documentation, tests, and code should be excellent. Add/commit/push.
- Test `lerpColor` in the first section of `test/colors.test.ts`. You may not change its spec, and these tests will be run against other implementations of `lerpColor` that are not your own. This is a very simple function: your test suite should be very small. Add/commit/push.
- Notice that the `lerp.ts` module exports `lerp` using the `lerpWeak` spec and implementation. Implement `lerpColor` in `src/colors.ts` by calling `lerp` and assuming your weak spec. Add/commit/push.
Didit check-in. If you review your Didit build results now, you should see two relevant public tests passing:
- *lerpColor test suite (running your tests against a broken staff implementation) … that always returns the first color* — this test passes if, when Didit runs your tests against our broken code, at least one of your tests fails.
- *lerpColor (running original test from pset starting code against your implementation) … should pass the example test* — this test passes if your implementation satisfies the example test we provided in `colors.test.ts` that looks for “25% gray between black and 50% gray.”

… but you should also see the other *running your tests against a broken staff implementation* tests passing, even though you haven’t worked on those test suites. That’s because those broken staff implementations are so broken that the example tests we provided find the bugs.
There will be other, hidden tests of both your test suite and your implementation for `lerpColor`, as described in Submitting and Grading below.

For `makeGradient`…

- Write tests for `makeGradient(..)`. You may not change its spec, and these tests will be run against other implementations of `makeGradient` that are not your own. Add/commit/push.

To remove ambiguity, `makePalette` should specify:

> `@returns` an array of exactly all the distinct RGB colors converted (and rounded, half up) from HSL colors that have: …

The specs are corrected in pset repos created after Fri Feb 14. If you created your repo earlier, please make these corrections in your files.
For `makePalette`…

- Write tests for `makePalette(..)`. You may not change its spec, and these tests will be run against other implementations of `makePalette` that are not your own. Add/commit/push.
- Note that we have provided `hslToRgb` and `rgbToHsl` conversion functions in `utils.ts`.
For `lerp` (again) and `interpolate`…

- At the top of the function body, write a one-sentence explanation: why can we not use `lerp` with your `lerpWeak` spec to implement `interpolate(..)`?
- In `src/lerp.ts`, write a strong spec for `lerpStrong` that makes it applicable in all the places you’ve used it so far, as well as for implementing `interpolate`. A useful spec!
- Analyze the relationship between your `lerpWeak` and `lerpStrong` specs in three sentences by answering the questions near the bottom of `lerp.ts`. It may be the case that your `lerpWeak` is weaker than `lerpStrong`… or it may not!
- Test your spec for `lerpStrong` in the last section of `test/lerp.test.ts`. This remains a very simple function: your test suite should be very small.
- Update the line `export const lerp = lerpWeak` so it instead uses `lerpStrong`.
- Write tests for `interpolate` in `test/colors.test.ts`. You may not change its spec, and these tests will be run against other implementations of `interpolate` that are not your own. Add/commit/push.
- Implement `interpolate` by calling `lerp`, which now guarantees your strong spec. Add/commit/push.
For `bezierInterpolate` and `bezierPath`…

- Strengthen the provided specs of `bezierInterpolate(..)` and `bezierPath(..)` in `src/curves.ts`, taking care to make them only stronger than what is provided. Then write tests for them in `test/curves.test.ts`:
  - The spec of `bezierInterpolate` must be stronger than what is provided, generally useful, and useful as a helper function to implement `bezierPath`. Rename `renameMe` and weaken its provided precondition; and determine the function’s postcondition.
  - The spec of `bezierPath` must be stronger than what is provided. Rename `renameMe` and weaken its provided precondition; and strengthen the function’s postcondition.
- See additional advice below about strengthening the specs of `bezierInterpolate` and `bezierPath`.
Iterative development recommendation
We hope this last part of the problem set is fun! But you should first iterate on all the previous problems, refining your specs, tests, and implementations, before starting problem 7.
Alpha grading will emphasize work on those earlier problems.
- Review the spec of `animateToFile(..)`, implemented in `lib/animate.ts`. It uses additional type aliases because spelling its input type is a mouthful: `Array<Array<{color: [number,number,number], points: Array<{x: number, y: number}>}>>`.
- In `src/toolbox.ts`, draft the specifications for at most three functions that will make it easy for a client to create animations like the two examples above (alley-oop and rainbow connection) by creating data structures that can be passed to `animateToFile`. See advice below about designing your toolbox of animation functions. If you have existing functions in `utils.ts` that would be useful in the toolbox, you may import and then re-export them. Add/commit/push.
- Write tests for your functions in `test/toolbox.test.ts`. Remember that the goal of these tests is to find bugs in these functions, not in other code. Add/commit/push.
- Use your toolbox to implement `handoutExampleOne(..)` and `handoutExampleTwo(..)`. You do not need to write tests for those two functions, and their implementations should be short because they can leverage your well-designed toolbox. Necessary numbers are provided in the code. Click “show how [animation] is constructed” under each figure to see all the details.
- As you work, you can use `npm start` to run `main.ts`, which will call `handoutExampleOne` and `-Two`. Then open (or refresh) `example1.html` (or `example2.html`) in your browser. You can freely modify `main.ts`.
- Add/commit/push. Your Didit build results should now include links where you can see the example animations! If you’re pushing close to the deadline, keep in mind that Didit feedback is provided only on a best-effort basis.
The next few sections have more advice about these steps:
- What you can and can’t change
- Specifications, including strengthening `bezierInterpolate` and `bezierPath`
- Testing
- Implementation
- Python notebook for investigating, visualizing, and testing
- Designing your toolbox of animation functions
Use `git add`/`git commit`/`git push` after every step that changes your code.
Committing frequently – whenever you’ve written some tests, fixed a bug, or added a new feature – is a good way to use version control, and will be a good habit to have for your team projects and as a software engineer in general, so start forming the habit now.
Your git commit history for this problem set should:
- demonstrate test-first programming;
- have frequent small commits;
- include short but descriptive commit messages.
What you can and can’t change
- Don’t change any provided file or folder names.
- Don’t change the function signatures: the functions in `lerp.ts` must use the function signatures that we provided.
- Don’t change the function signatures and specifications: the exported functions in `colors.ts` must use the function signatures and the specifications that we provided.
- Don’t weaken the specifications: the exported functions in `curves.ts` must have a stronger spec than we provided.
- Don’t export anything new: from `lerp.ts`, `colors.ts`, and `curves.ts`, only the provided functions may be exported.
- Don’t include illegal test cases: the tests you write must respect the specifications that you are testing. Tests against specs we provided will be run against other implementations of those specs.
Aside from these requirements, however, you are free to add new functions and new classes if you wish.
Specifications
Before you start, read the specs carefully, and take notes about what you observe. You can either read the TypeScript source files directly in VS Code, or read the TypeDoc documentation for all functions generated from the source files.
Keep in mind these facts about specifications:
Some specs have preconditions. Recall from the specs reading that when preconditions are violated by the client, the behavior of the function is completely unspecified.
Some specs have underdetermined postconditions. Recall that underdetermined postconditions allow a range of behavior. When you’re implementing such a function, the exact behavior of your function within that range is up to you to decide. When you’re writing a test case for the function, the test must allow for the full range of variation in the behavior of the implementation, because otherwise your test case is not a legal client of the spec as required above.
Exported functions can be used anywhere. These functions are independent modules, which might be called by various parts of a graphics system, not necessarily the code you write. A function implementation must be able to handle all inputs that satisfy the precondition, even if they don’t arise in your use of those functions. And a function may return any output that satisfies its postcondition, even if that doesn’t seem useful in the context you would like to call it.
In this problem set we are working with floating-point numbers, but specifying the numerical precision of calculations, and writing algorithms to implement a specified precision, are both outside the scope of 6.102. Therefore, in all specifications:
- Integral `number` values are assumed to be safe, between `MIN_SAFE_INTEGER` and `MAX_SAFE_INTEGER` (and of course they may be constrained further).
- When the computation of a non-integral `number` is constrained, error of up to tolerance = 0.001 is allowed. You may not use any other definition of floating-point equivalence for any of the functions we require you to specify or implement. If you are asked to strengthen or weaken a spec, you may not do so by changing the tolerance for floating-point error. You can use the provided `assertApproxEqual` function in `utils.ts` to write tests compatible with this specification.

Our goal with these assumptions is to simplify, not complicate, the problem set. If you think properly testing or implementing one of the required specs is difficult for reasons of numerical accuracy, or made more difficult by these assumptions, please ask a question.
Strengthening the specs of bezierInterpolate and bezierPath
To strengthen the spec of `bezierInterpolate`…

- consider weakening its statically-checked precondition on `renameMe` (as well as renaming that parameter)
- avoid weakening its provided exceptional-case postcondition

To strengthen the spec of `bezierPath`…

- consider weakening its statically-checked precondition on `renameMe` by using a union type (as well as renaming that parameter)
- then consider your postcondition when `undefined` is provided for that parameter, since you cannot add exceptional cases
Testing
You should partition each function’s inputs and outputs, write down your partitions in a testing strategy comment, and choose test cases to cover the partitions.
The function specs and implementations are in the files under src/, and the corresponding Mocha tests are in files under test/.
Separating implementation code from test code is a common practice in development projects.
It makes the implementation code easier to understand, uncluttered by tests, and easier to package up for release.
The test suite for a function may already have example tests in it, which are provided as models. We recommend that you read those example tests, then throw them away and write your own.
Your test cases should be chosen using partitioning. This approach is explained in the reading about testing.
Include a comment at the top of each test suite describing your testing strategy. Examples are shown in the reading about testing.
Your test cases should be small and well-chosen. Don’t use a large set of data for each test. Instead, create inputs carefully chosen to test the partition you’re trying to test.
Your tests should find bugs. We will grade your test cases in part by running them against buggy implementations and seeing if your tests catch the bugs. So consider ways an implementation might inadvertently fail to meet the spec, and choose tests that will expose those bugs.
Your tests must be legal clients of the spec. We will also run your test cases against legal, variant implementations that still strictly satisfy the specs, and your test cases should not complain for these good implementations. That means that your test cases can’t make extra assumptions that are only true for your own implementation.
- Put each test case in its own `it()` function. This will be far more useful than a single large test function, since it pinpoints the problems in the implementation.
- Run testing coverage. When it’s time to do glass box testing, run your test suites with `npm run coverage`.
- Be careful calling helper functions from testing code. Your test cases in, e.g., `test/colors.test.ts`, must not call a new helper function that you have defined in `src/colors.ts`. Remember that your tests will be run against staff implementations of the `colors.ts` functions, and code in your version of that file will not be available. Put helper functions needed only by your testing code into `test/`; and put helper functions needed by both implementation and test code into `src/utils.ts` (discussed in the section below).
- Again, keep your tests small. Don’t use unreasonable amounts of resources (such as arrays or strings of length `MAX_SAFE_INTEGER`). We won’t expect your test suite to catch bugs related to running out of resources; every program fails when it runs out of resources.
- Use `strictEqual` and `deepStrictEqual`. For example, `assert.equal(1, '1')` passes because it uses the dangerous `==` comparison. Don’t use it; always use `strictEqual` for immutable built-in types. However, `assert.strictEqual([1], [1])` fails because it checks that the two arguments refer to the same array instance. To compare data structures, use `deepStrictEqual` as shown in the provided example tests.
- Use `assertApproxEqual`. As discussed in the section above, use `assertApproxEqual` to compare floating-point numbers when error is permitted.
- Add/commit/push frequently. Whenever you do a nontrivial amount of work – e.g. after writing a testing strategy comment; after choosing some test inputs; after writing Mocha tests and seeing that they pass – you should add/commit/push your work with git.
Implementation
Implement each function, and revise your implementation and your tests until all your tests pass.
Some advice about implementation:
- Small helper functions. If you want to write small helper functions for your implementation of a single module, then you can put them in the relevant implementation file alongside the other functions. Don’t export helper functions in `lerp.ts`, `colors.ts`, or `curves.ts`, because that changes the spec of those modules in a way that you are not allowed to do. This means you cannot write tests for these helper functions, which is why they must be small. You are relying on your test suite for the public functions of those modules to achieve coverage and find bugs in these small helper functions.
- Larger helper functions. If you want to write helper functions of any complexity, then you should put them in the `utils.ts` file, and write Mocha tests for them in `utils.test.ts`. This is also the place to put helper functions that are needed by both implementation code and test code. In `utils.ts`, if you export a function called `myHelper`, then in `colors.ts` and `utils.test.ts` you can call the function as `utils.myHelper(..)`.
- Do not ask us to tell you whether a helper function is small or large. If you are not sure, then the function is large enough that it should have TypeDoc documentation and its own tests. Put the helper function in `utils.ts`, give it a clear spec, and test it in `utils.test.ts`.
- Don’t call testing code. Don’t put a helper function in the `test/` folder if you need to call it from implementation code in `src/`. Testing code should be completely detachable from the implementation, so that the implementation can be packaged up separately for deployment. We also detach your testing code when we run staff tests against your implementation. Put helper functions needed by implementation code, or by both test and implementation code, into `src/utils.ts`.
- Eliminate warnings. Revise your code to address all the yellow-underlined warnings shown in VS Code. These include both TypeScript compiler warnings and ESLint warnings, because ESLint is enabled for this problem set.
Check testing coverage. Do glass box testing and revise your tests until you have satisfactory code coverage.
Review your own code. Read your code critically with an eye to making it as SFB, ETU, and RFC as possible.
Test frequently. Rerun your tests whenever you make a change to your code.
- Use Mocha’s `-f` argument to debug a single test. To run a subset of test cases from your whole test suite, use `npm test -- -f 'pattern'`. Only tests whose `it()` description contains the string `pattern` will be run. For example, in the provided starting code, the command `npm test -- -f 'lerpColor covers...'` runs only the example test for `lerpColor`. This is useful for debugging, because it allows you to focus on just one failing test at a time.
- Use `console.log` or `util.inspect` for print debugging. Many types, including `Map` and `Set`, have very unhelpful `toString()` methods. If you try to debug using, say, `console.log("gradient is " + gradientMap)`, you will likely see something unhelpful like “gradient is [object Map]”. There are two ways to make this better:
  - Use multiple arguments to `console.log()` instead of relying on `+` and `toString()`: `console.log("gradient is", gradientMap);` This way `console.log()` takes responsibility for displaying the `gradientMap` object, and it does a much better job of showing you what’s inside it.
  - Use `util.inspect()` to turn the object into a string: `import util from 'node:util'; console.log("gradient is " + util.inspect(gradientMap));`
Add/commit/push frequently. Whenever you do a nontrivial amount of work – e.g. after writing a function body and seeing that the tests pass, or after finding a bug, adding a regression test for it, and fixing it – you should add/commit/push your work with git.
After you’ve implemented all the functions, you can use `npm start` to run `main.ts`, which lists the functions exported by your toolbox and then calls `handoutExampleOne` and `handoutExampleTwo`.
The `main.ts` file is not part of the spec of the problem set, is not used for grading, and you are free to edit it as you wish.
Python notebook
You can investigate RGB & HSL color and Bézier curves in Python by opening the Problem Set 1 Workbook Jupyter notebook on Google Colab.
To modify and run the notebook on Colab, you must be signed in.
- You can also view and download the notebook on GitHub. If you download the file you can open and run it locally in VS Code, which has built-in Jupyter notebook support but may require setup on your machine.
The notebook can help you visualize colors and curves by using Python’s well-tested libraries, check your work, and create data for test cases.
Designing your toolbox of animation functions
If you identify a computation that both of the examples must perform, that is a natural candidate for abstracting into a toolbox function.
However, the two example animations are quite different. So you might instead identify a computation that you expect will be a frequent part of creating similar animations, and build a toolbox function for that — even if you only use the function to implement one of the two required examples.
You may find that the required specs in `colors.ts` (which you cannot change) and `curves.ts` (which you can only strengthen) are not always a perfect fit for building these animations.
How you proceed is up to you.
Keep your code DRY and your specifications clear.
Keep your toolbox small: at most three new functions.
You would like `handoutExampleOne` and `-Two` to be short, but they may still have work to do, too, and they may directly use the other functions you have implemented.
If you have an existing function in `utils.ts` that would be useful in the toolbox, import it into and then re-export it from `toolbox.ts` so that it becomes part of that module’s spec.
Do not duplicate its test suite; keep it in `utils.test.ts`.
Your specs should be safe from bugs, easy to understand, and ready for change. Use static typing where possible.
When your specs have preconditions that cannot be statically checked, your implementations should check the preconditions and fail fast if the precondition is not satisfied.
Submitting
Make sure you commit AND push your work to your repository on github.mit.edu.
We will use the state of your repository on github.mit.edu as of 10:00pm on the deadline date.
When you git push, the continuous build system attempts to compile your code and run the public tests (which are only a subset of the autograder tests).
You can always review your build results at didit.mit.edu/6.102/sp25.
Didit feedback is provided on a best-effort basis:
- There is no guarantee that Didit tests will run within any particular timeframe, or at all. If you push code close to the deadline, the large number of submissions will slow the turnaround time before your code is examined.
- If you commit and push right before the deadline, it’s okay if the Didit build finishes after the deadline, because the commit-and-push was on time.
- Passing some or all of the public tests on Didit is no guarantee that you will pass the full battery of autograding tests — but failing them is almost sure to mean lost points on the problem set.
Grading
Your overall ps1 grade will be computed as approximately:
~40% alpha autograde (including online exercises) + ~10% alpha manual grade + ~35% beta autograde + ~15% beta manual grade
The autograder test cases will not change from alpha to beta, but their point values will. In order to encourage test-first programming, alpha autograding will put more weight on your tests and less weight on your implementations. On the beta, autograding will look for both good testing and good implementations. Test cases may be worth zero points on the alpha and nonzero points on the beta, or vice versa.
Manual grading of the alpha may examine any part of your solution, including specs, explanations we requested, test suites, implementations, and your Git commit history. Manual grading of the beta may examine any part, and how you addressed manual grading and code review feedback from the alpha.