6.102 — Software Construction
Spring 2025

Problem Set 1: Specific Graphics

The deadlines for this problem set are shown on the course calendar.

The purpose of this problem set is to give you practice with test-first programming and specifications. Given a set of specifications, you will write unit tests that check for compliance with the specifications, and then implement code that meets the specifications. You will also strengthen the specs of some functions, write your own specs for other functions, and test and implement those specs as well.

More progress with Git

Read and do the exercises in Git 2: Disaster Recovery.

Get the code

To get started,

  1. Ask Didit to create a remote psets/ps1 repository for you on github.mit.edu.

  2. Clone the repo. Find the git clone command at the top of your Didit assignment page, copy it entirely, paste it into your terminal, and run it.

  3. Run npm install, then open the project in Visual Studio Code. See Problem Set 0 if you need a refresher on how to create, clone, or set up your repository.

Overview

Before we dive in: don’t get stuck on any of the new concepts or mathematical details as you read this overview. You’ll come back to them as you work through the problem set.

In this problem set, you will specify, test, and implement functions for working with two kinds of graphics data. First, colors, including:

interpolating colors

generating gradients

generating palettes

In RGB color, this green is two-thirds of the way from sky blue to bright amber.

A gradient from light teal to dark indigo with four intermediate RGB color steps.

Starting from deep pink, we find three more colors evenly spaced around the color wheel using HSL color.

And then Bézier curves (a kind of parametric curve), including:

finding one point on a curve

finding a set of points on a curve

On the curve with control points:
(0, 9), (10,-6), (20,16), (30,0)
… the point one-third of the way* along the curve is approximately (10, 3.56).

On the curve with control points:
(0, 9), (50,9), (0,0), (30,1)
… the 10 marked points are quadratically spaced* along the curve.

* “one-third of the way” and “quadratically spaced” are w.r.t. the curve parameter, not arc length
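The first example above can be checked directly: a point on a cubic Bézier curve is a weighted sum of its four control points. Here is a sketch using illustrative names, not the pset's required bezierInterpolate spec:

```typescript
// Evaluate a cubic Bézier curve at parameter t using Bernstein polynomials.
// Illustrative sketch only; the pset's own functions have their own specs.
type Point = { x: number; y: number };

function cubicBezier(p0: Point, p1: Point, p2: Point, p3: Point, t: number): Point {
  const u = 1 - t;
  const b0 = u * u * u;     // weight of p0
  const b1 = 3 * u * u * t; // weight of p1
  const b2 = 3 * u * t * t; // weight of p2
  const b3 = t * t * t;     // weight of p3
  return {
    x: b0 * p0.x + b1 * p1.x + b2 * p2.x + b3 * p3.x,
    y: b0 * p0.y + b1 * p1.y + b2 * p2.y + b3 * p3.y,
  };
}

// Control points from the figure, evaluated one-third of the way along:
const pt = cubicBezier(
  { x: 0, y: 9 }, { x: 10, y: -6 }, { x: 20, y: 16 }, { x: 30, y: 0 },
  1 / 3,
); // approximately (10, 3.56)
```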

To begin the problem set, you will start with simple linear interpolation in 1 dimension, in 3-dimensional color space, and in 2-dimensional Euclidean space. Along the way, you will also add the idea of an easing function that controls the selection of where to interpolate. For example:

animating points

animating with easing

On the parametric curve:
x = t, y = t
…which draws a diagonal segment, mark the points at:
t = [0, .25, .5, .75, 1]

Animating those points evenly over time will create a simple linear motion.*

On the same curve, apply an easing function:
easing(t_in) = 2·t_in²
… and mark:
t_out = [0, .125, .5, 1.125, 2]

Animating those points evenly over time creates accelerating or decelerating motion.*

* Both of these animations use many more points to create a convincing illusion.

In animating with easing above, we’re interpolating points on the curve outside the range t ∈ [0, 1]. We won’t do that for Bézier curves, but it works in other contexts, such as this very simple curve.
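These two ingredients, 1-dimensional linear interpolation and the easing function from the second figure, can be sketched as follows (the names are illustrative, not the pset's required signatures):

```typescript
// 1-D linear interpolation: the value a fraction t of the way from start to end.
function lerp1d(start: number, end: number, t: number): number {
  return start + (end - start) * t;
}

// The easing function from the figure above: easing(tIn) = 2·tIn².
function easing(tIn: number): number {
  return 2 * tIn * tIn;
}

const tIn = [0, 0.25, 0.5, 0.75, 1];
const tOut = tIn.map(easing); // [0, 0.125, 0.5, 1.125, 2], as in the figure
```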

By the end of the problem set, you will build up a set of functions that you will use to replicate the animations below… and perhaps create new animations of your own!

example #1: alley-oop

example #2: rainbow connection

Click to play and pause each animation.

Show how alley-oop is constructed

In this looping one-second animation…

The circle shape is two Bézier curves with control points:
(10, 10), (10, 17), (20, 17), (20, 10) and
(10, 10), (10, 3), (20, 3), (20, 10)

It is translated by the curve with control points:
(0, 0), (40, 110), (80, 0)
… with the easing function:
4·(t - ½)³ + ½
… so that it slows down then speeds up again.

Its RGB color transitions linearly from (128,128,128) to (173,255,47).

Show how rainbow connection is constructed

In this looping two-second animation…

The heart shape is two Bézier curves with control points:
(50, 25), (34, 42), (2, 36), (0, 0), (50, -40) and
(50, -40), (100, 0), (98, 36), (66, 42), (50, 25)

Each curve is revealed over time according to an easing function that is the 1-dimensional Bézier curve with control points:
(.25), (1), (-1), (1), (1), (1), (1)
… so that each curve starts drawn up to t=.25, makes some progress, bounces back, then goes to t=1.

And the color of each curve is a rainbow gradient around the color wheel that starts at RGB (39,229,220).

Click to open the full specification.

Steps

Since we are doing iterative test-first programming, your workflow for each function should follow the order specs → tests → implementation, iterating as needed. Make steady progress across functions and down the workflow, in the following pattern:

  0. Read this entire handout first.

  1. Read carefully the provided specs for functions you must implement. The specs use type aliases to give names to some data structures we don’t want to re-spell every place they are used: Color and Point.

  2. Problems 2(a–c) require the stronger vs. weaker specs material from Reading 5.

    To continue working if you have not yet done Reading 5, skip to: 2(d) writing tests for lerpColor, 3(a) tests for makeGradient, and 4(a) tests for makePalette.

    Then circle back before implementing those functions and start with specs, tests, and code for lerpWeak.

    For lerp and lerpColor

    1. In src/lerp.ts, write a weak spec for lerpWeak(..) that makes it understandable, appropriate for implementing lerpColor(..), and as weak as possible. Useful for lerpColor, but otherwise unnecessarily weak!

    2. Test your spec for lerpWeak in the first section of test/lerp.test.ts. This is a very simple function: your test suite should be very small. Add/commit/push.

    3. Implement lerpWeak. Review your work: the spec should be uncomfortably weak, but the quality of documentation, tests, and code should be excellent. Add/commit/push.

    4. Test lerpColor in the first section of test/colors.test.ts. You may not change its spec, and these tests will be run against other implementations of lerpColor that are not your own. This is a very simple function: your test suite should be very small. Add/commit/push.

    5. Notice that the lerp.ts module exports lerp using the lerpWeak spec and implementation. Implement lerpColor in src/colors.ts by calling lerp and assuming your weak spec. Add/commit/push.

    Didit check-in. If you review your Didit build results now, you should see two relevant public tests passing:

    • lerpColor test suite (running your tests against a broken staff implementation) … that always returns the first color — this test passes if, when Didit runs your tests against our broken code, at least one of your tests fails
    • lerpColor (running original test from pset starting code against your implementation) … should pass the example test — this test passes if your implementation satisfies the example test we provided in colors.test.ts that looks for “25% gray between black and 50% gray.”

    … but you should also see the other “running your tests against a broken staff implementation” tests passing, even though you haven’t worked on those test suites. That’s because those staff implementations are broken enough that the example tests we provided already find the bugs.

    There will be other, hidden tests of both your test suite and your implementation for lerpColor as described in submitting and grading.
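    As an illustration of the delegation in step 5, and assuming (as the handout's figures suggest) that lerpColor interpolates each RGB channel linearly, the implementation might be sketched as below. This is hypothetical: the provided spec in colors.ts is authoritative, and any rounding it requires is omitted here.

```typescript
// Hypothetical sketch; the provided spec of lerpColor in colors.ts governs.
type Color = [number, number, number];

function lerp(start: number, end: number, t: number): number {
  return start + (end - start) * t; // stand-in for the exported lerp
}

function lerpColorSketch(from: Color, to: Color, t: number): Color {
  // Interpolate each RGB channel independently by delegating to lerp.
  return [
    lerp(from[0], to[0], t),
    lerp(from[1], to[1], t),
    lerp(from[2], to[2], t),
  ];
}
```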

  3. For makeGradient

    1. Write tests for makeGradient(..). You may not change its spec, and these tests will be run against other implementations of makeGradient that are not your own. Add/commit/push.

    2. Implement makeGradient. Add/commit/push.

      Correction

      To remove ambiguity, makePalette should specify:

      @returns an array of exactly all the distinct RGB colors converted (and rounded, half up) from HSL colors that have: …

      And hslToRgb should specify:

      Convert from HSL to nearest (rounded, half up) RGB color.

      The specs are corrected in pset repos created after Fri Feb 14. If you created your repo earlier, please make these corrections in your files.

  4. For makePalette

    1. Write tests for makePalette(..). You may not change its spec, and these tests will be run against other implementations of makePalette that are not your own. Add/commit/push.

      Note that we have provided hslToRgb and rgbToHsl conversion functions in utils.ts.

    2. Implement makePalette. Add/commit/push.
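      For intuition about what an HSL-to-RGB conversion computes, here is a sketch of the standard algorithm with half-up rounding. Use the provided hslToRgb in utils.ts, whose spec is authoritative; this sketch is only for understanding and for checking test data.

```typescript
// Standard HSL→RGB conversion, sketched for intuition only.
// h in degrees [0, 360); s and l in [0, 1]; returns integer RGB in [0, 255].
function hslToRgbSketch(h: number, s: number, l: number): [number, number, number] {
  const c = (1 - Math.abs(2 * l - 1)) * s;          // chroma
  const x = c * (1 - Math.abs(((h / 60) % 2) - 1)); // second-largest component
  const m = l - c / 2;                              // lightness offset
  const [r, g, b] =
    h < 60 ? [c, x, 0] : h < 120 ? [x, c, 0] : h < 180 ? [0, c, x] :
    h < 240 ? [0, x, c] : h < 300 ? [x, 0, c] : [c, 0, x];
  // Math.round rounds halves up, matching the "rounded, half up" wording.
  return [Math.round((r + m) * 255), Math.round((g + m) * 255), Math.round((b + m) * 255)];
}
```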

  5. For lerp (again) and interpolate

    1. At the top of the interpolate function body, write a one-sentence explanation: why can we not use lerp with your lerpWeak spec to implement interpolate(..)?

    2. In src/lerp.ts, write a strong spec for lerpStrong that makes it applicable in all the places you’ve used it so far, as well as for implementing interpolate. A useful spec!

    3. Analyze the relationship between your lerpWeak and lerpStrong specs in three sentences by answering the questions near the bottom of lerp.ts. It may be the case that your lerpWeak is weaker than lerpStrong… or it may not!

    4. Test your spec for lerpStrong in the last section of test/lerp.test.ts. This remains a very simple function: your test suite should be very small.

    5. Implement lerpStrong. Add/commit/push.

    6. Update the line export const lerp = lerpWeak so it instead uses lerpStrong.

    7. Write tests for interpolate in test/colors.test.ts. You may not change its spec, and these tests will be run against other implementations of interpolate that are not your own. Add/commit/push.

    8. Implement interpolate by calling lerp, which now guarantees your strong spec. Add/commit/push.

  6. For bezierInterpolate and bezierPath

    1. Strengthen the provided specs of bezierInterpolate(..) and bezierPath(..) in src/curves.ts, taking care to make them only stronger than what is provided. Then write tests for them in test/curves.test.ts:

      • The spec of bezierInterpolate must be stronger than what is provided, generally useful, and useful as a helper function to implement bezierPath. Rename renameMe and weaken its provided precondition, and determine the function’s postcondition.

      • The spec of bezierPath must be stronger than what is provided. Rename renameMe and weaken its provided precondition, and strengthen the function’s postcondition.

      • See additional advice below about strengthening the specs of bezierInterpolate and bezierPath.

      Add/commit/push at each specification and testing sub-step.

    2. Implement these functions. Add/commit/push.
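    The curves in this handout have varying numbers of control points (three to seven across the examples), so an evaluator that works for any degree is handy. One standard approach is de Casteljau's algorithm, which repeatedly linearly interpolates adjacent control points; this sketch uses illustrative names, not the required signatures:

```typescript
// de Casteljau's algorithm: evaluate a Bézier curve of any degree at t by
// repeated linear interpolation of adjacent control points. Sketch only.
type Pt = { x: number; y: number };

function deCasteljau(controls: ReadonlyArray<Pt>, t: number): Pt {
  let pts: Pt[] = controls.map(p => ({ ...p }));
  while (pts.length > 1) {
    const next: Pt[] = [];
    for (let i = 0; i + 1 < pts.length; i++) {
      // lerp each coordinate between neighboring points
      next.push({
        x: pts[i].x + (pts[i + 1].x - pts[i].x) * t,
        y: pts[i].y + (pts[i + 1].y - pts[i].y) * t,
      });
    }
    pts = next;
  }
  return pts[0];
}
```

    For a cubic curve this agrees with the Bernstein-polynomial form, and it generalizes directly to the 5-point and 7-point curves in rainbow connection.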

  7. For creating animations

    1. Iterative development recommendation

      We hope this last part of the problem set is fun! But you should first iterate on all the previous problems, refining your specs, tests, and implementations, before starting problem 7.

      Alpha grading will emphasize work on those earlier problems.

      Review the spec of animateToFile(..), implemented in lib/animate.ts.

      It uses additional type aliases because spelling its input type is a mouthful:
      Array<Array<{color: [number,number,number], points: Array<{x: number, y: number}>}>>

    2. In src/toolbox.ts, draft the specifications for at most three functions that will make it easy for a client to create animations like the two examples above (alley-oop and rainbow connection) by creating data structures that can be passed to animateToFile.

    3. Write tests for your functions in test/toolbox.test.ts. Remember that the goal of these tests is to find bugs in these functions, not in other code. Add/commit/push.

    4. Implement your functions. Add/commit/push.

    5. Use your toolbox to implement handoutExampleOne(..) and handoutExampleTwo(..). You do not need to write tests for those two functions, and their implementations should be short because they can leverage your well-designed toolbox.

      Necessary numbers are provided in the code. Click “show how [animation] is constructed” under each figure to see all the details.

      As you work, you can use npm start to run main.ts, which will call handoutExampleOne and handoutExampleTwo. Then open (or refresh) example1.html (or example2.html) in your browser. You can freely modify main.ts.

    Add/commit/push. Your Didit build results should now include links where you can see the example animations! If you’re pushing close to the deadline, keep in mind that Didit feedback is provided only on a best-effort basis.

The next few sections have more advice about these steps:

Use git add/git commit/git push after every step that changes your code. Committing frequently – whenever you’ve written some tests, fixed a bug, or added a new feature – is a good way to use version control, and will be a good habit to have for your team projects and as a software engineer in general, so start forming the habit now. Your git commit history for this problem set should:

  • demonstrate test-first programming;
  • have frequent small commits;
  • include short but descriptive commit messages.

What you can and can’t change

  • Don’t change any provided file or folder names.
  • Don’t change the function signatures: the functions in lerp.ts must use the function signatures that we provided.
  • Don’t change the function signatures and specifications: the exported functions in colors.ts must use the function signatures and the specifications that we provided.
  • Don’t weaken the specifications: the exported functions in curves.ts must have a stronger spec than we provided.
  • Don’t export anything new: from lerp.ts, colors.ts, and curves.ts, only the provided functions may be exported.
  • Don’t include illegal test cases: the tests you write must respect the specifications that you are testing. Tests against specs we provided will be run against other implementations of those specs.

Aside from these requirements, however, you are free to add new functions and new classes if you wish.

Specifications

Before you start, read the specs carefully, and take notes about what you observe. You can either read the TypeScript source files directly in VS Code, or read the TypeDoc documentation for all functions generated from the source files.

Keep in mind these facts about specifications:

  • Some specs have preconditions. Recall from the specs reading that when preconditions are violated by the client, the behavior of the function is completely unspecified.

  • Some specs have underdetermined postconditions. Recall that underdetermined postconditions allow a range of behavior. When you’re implementing such a function, the exact behavior of your function within that range is up to you to decide. When you’re writing a test case for the function, the test must allow for the full range of variation in the behavior of the implementation, because otherwise your test case is not a legal client of the spec as required above.

  • Exported functions can be used anywhere. These functions are independent modules, which might be called by various parts of a graphics system, not necessarily the code you write. A function implementation must be able to handle all inputs that satisfy the precondition, even if they don’t arise in your use of those functions. And a function may return any output that satisfies its postcondition, even if that doesn’t seem useful in the context you would like to call it.

In this problem set we are working with floating-point numbers, but specifying the numerical precision of calculations, and writing algorithms to implement a specified precision, are both outside the scope of 6.102. Therefore, in all specifications:

  • Integral number values are assumed to be safe, between MIN_SAFE_INTEGER and MAX_SAFE_INTEGER (and of course they may be constrained further).

  • When the computation of a non-integral number is constrained, error of up to tolerance = 0.001 is allowed. You may not use any other definition of floating-point equivalence for any of the functions we require you to specify or implement. If you are asked to strengthen or weaken a spec, you may not do so by changing the tolerance for floating-point error. You can use the provided assertApproxEqual function in utils.ts to write tests compatible with this specification.

  • Our goal with these assumptions is to simplify, not complexify, the problem set. If you think properly testing or implementing one of the required specs is difficult for reasons of numerical accuracy, or made more difficult by these assumptions, please ask a question.

Strengthening the specs of bezierInterpolate and bezierPath

To strengthen the spec of bezierInterpolate

  • consider weakening its statically-checked precondition on renameMe (as well as renaming that parameter)
  • avoid weakening its provided exceptional-case postcondition

To strengthen the spec of bezierPath

  • consider weakening its statically-checked precondition on renameMe by using a union type (as well as renaming that parameter)
  • then consider your postcondition when undefined is provided for that parameter, since you cannot add exceptional cases

Testing

You should partition each function’s inputs and outputs, write down your partitions in a testing strategy comment, and choose test cases to cover the partitions.

The function specs and implementations are in the files under src/, and the corresponding Mocha tests are in files under test/. Separating implementation code from test code is a common practice in development projects. It makes the implementation code easier to understand, uncluttered by tests, and easier to package up for release.

The test suite for a function may already have example tests in it, which are provided as models. You are recommended to read and then throw away those example tests and write your own.

Some advice about testing:

  • Your test cases should be chosen using partitioning. This approach is explained in the reading about testing.

  • Include a comment at the top of each test suite describing your testing strategy. Examples are shown in the reading about testing.

  • Your test cases should be small and well-chosen. Don’t use a large set of data for each test. Instead, create inputs carefully chosen to test the partition you’re trying to test.

  • Your tests should find bugs. We will grade your test cases in part by running them against buggy implementations and seeing if your tests catch the bugs. So consider ways an implementation might inadvertently fail to meet the spec, and choose tests that will expose those bugs.

  • Your tests must be legal clients of the spec. We will also run your test cases against legal, variant implementations that still strictly satisfy the specs, and your test cases should not complain for these good implementations. That means that your test cases can’t make extra assumptions that are only true for your own implementation.

  • Put each test case in its own it() function. This will be far more useful than a single large test function, since it pinpoints the problems in the implementation.

  • Run testing coverage. When it’s time to do glass box testing, run your test suites with npm run coverage.

  • Be careful calling helper functions from testing code. Your test cases in, e.g., test/colors.test.ts, must not call a new helper function that you have defined in src/colors.ts. Remember that your tests will be run against staff implementations of the colors.ts functions, and code in your version of that file will not be available. Put helper functions needed only by your testing code into test/; and put helper functions needed by both implementation and test code into src/utils.ts (discussed in the section below).

  • Again, keep your tests small. Don’t use unreasonable amounts of resources (such as arrays or strings of length MAX_SAFE_INTEGER). We won’t expect your test suite to catch bugs related to running out of resources; every program fails when it runs out of resources.

  • Use strictEqual and deepStrictEqual. For example, assert.equal(1, '1') passes because it uses the dangerous == comparison. Don’t use equal; always use strictEqual for immutable built-in types.

    However, assert.strictEqual([1], [1]) fails because it checks that the two arguments refer to the same array instance. To compare data structures, use deepStrictEqual as shown in the provided example tests.

  • Use assertApproxEqual. As discussed in the section above, use assertApproxEqual to compare floating-point numbers when error is permitted.

  • Add/commit/push frequently. Whenever you do a nontrivial amount of work – e.g. after writing a testing strategy comment; after choosing some test inputs; after writing Mocha tests and seeing that they pass – you should add/commit/push your work with git.

Implementation

Implement each function, and revise your implementation and your tests until all your tests pass.

Some advice about implementation:

  • Small helper functions. If you want to write small helper functions for your implementation of a single module, then you can put them in the relevant implementation file alongside the other functions. Don’t export helper functions in lerp.ts, colors.ts, or curves.ts, because that changes the spec of those modules in a way that you are not allowed to do. This means you cannot write tests for these helper functions, which is why they must be small. You are relying on your test suite for the public functions of those modules to achieve coverage and find bugs in these small helper functions.

  • Larger helper functions. If you want to write helper functions of any complexity, then you should put them in the utils.ts file, and write Mocha tests for them in utils.test.ts. This is also the place to put helper functions that are needed by both implementation code and test code. In utils.ts, if you export a function called myHelper, then in colors.ts and utils.test.ts you can call the function as utils.myHelper(..).

    Do not ask us to tell you whether a helper function is small or large. If you are not sure, then the function is large enough that it should have TypeDoc documentation and its own tests. Put the helper function in utils.ts, give it a clear spec, and test it in utils.test.ts.

  • Don’t call testing code. Don’t put a helper function in the test/ folder if you need to call it from implementation code in src/. Testing code should be completely detachable from the implementation, so that the implementation can be packaged up separately for deployment. We also detach your testing code when we run staff tests against your implementation. Put helper functions needed by implementation code, or by both test and implementation code, into src/utils.ts.

  • Eliminate warnings. Revise your code to address all the yellow-underlined warnings shown in VS Code. These warnings should include both TypeScript compiler warnings and ESLint warnings, because ESLint is enabled for this problem set.

  • Check testing coverage. Do glass box testing and revise your tests until you have satisfactory code coverage.

  • Review your own code. Read your code critically with an eye to making it as SFB, ETU, and RFC as possible.

  • Test frequently. Rerun your tests whenever you make a change to your code.

  • Use Mocha’s -f argument to debug a single test. To run a subset of test cases from your whole test suite, use npm test -- -f 'pattern'. Only tests whose it() description contains the string pattern will be run. For example, in the provided starting code, the command npm test -- -f 'lerpColor covers...' runs only the example test for lerpColor. This is useful for debugging, because it allows you to focus on just one failing test at a time.

  • Use console.log or util.inspect for print debugging. Many types, including Map and Set, have very unhelpful toString() methods. This means if you try to debug using, say, console.log("gradient is " + gradientMap), you will likely see something unhelpful like “gradient is [object Map]”. There are two ways to make this better:

    • Use multiple arguments to console.log() instead of relying on + and toString():

      console.log("gradient is", gradientMap);

      This means that console.log() takes responsibility for displaying the gradientMap object, and it does a much better job of showing you what’s inside it.

    • Use util.inspect() to turn the object into a string:

      import util from 'node:util';
      console.log("gradient is " + util.inspect(gradientMap));
  • Add/commit/push frequently. Whenever you do a nontrivial amount of work – e.g. after writing a function body and seeing that the tests pass, or after finding a bug, adding a regression test for it, and fixing it – you should add/commit/push your work with git.

After you’ve implemented all the functions, you can use npm start to run main.ts, which lists the functions exported by your toolbox and then calls handoutExampleOne and handoutExampleTwo. The main.ts file is not part of the spec of the problem set, is not used for grading, and you are free to edit it as you wish.

Python notebook

You can investigate RGB & HSL color and Bézier curves in Python by opening the Problem Set 1 Workbook Jupyter notebook on Google Colab. To modify and run the notebook on Colab, you must be signed in.

  • You can also view and download the notebook on GitHub. If you download the file you can open and run it locally in VS Code, which has built-in Jupyter notebook support but may require setup on your machine.

The notebook can help you visualize colors and curves by using Python’s well-tested libraries, check your work, and create data for test cases.

Designing your toolbox of animation functions

If you identify a computation that both of the examples must perform, that is a natural candidate for abstracting into a toolbox function.

However, the two example animations are quite different. So you might instead identify a computation that you expect will be a frequent part of creating similar animations, and build a toolbox function for that — even if you only use the function to implement one of the two required examples.

You may find that the required specs in colors.ts (which you cannot change) and curves.ts (which you can only strengthen) are not always a perfect fit for building these animations. How you proceed is up to you. Keep your code DRY and your specifications clear.

Keep your toolbox small: at most three new functions. You would like handoutExampleOne and handoutExampleTwo to be short, but they may still have work to do, and they may directly use the other functions you have implemented.

If you have an existing function in utils.ts that would be useful in the toolbox, import it into and then re-export it from toolbox.ts so that it becomes part of that module’s spec. Do not duplicate the test suite; keep it in utils.test.ts.

Your specs should be safe from bugs, easy to understand, and ready for change. Use static typing where possible.

When your specs have preconditions that cannot be statically checked, your implementations should check the preconditions and fail fast if the precondition is not satisfied.
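For example, a toolbox helper with a precondition that cannot be statically checked might fail fast like this (the function, its name, and its precondition are hypothetical, not part of any required spec):

```typescript
// Hypothetical helper: frameCount parameter values evenly spaced from start to end.
// Precondition (not statically checkable): frameCount is an integer >= 2.
function framesBetween(start: number, end: number, frameCount: number): number[] {
  if (!Number.isInteger(frameCount) || frameCount < 2) {
    // Fail fast with a clear message instead of producing garbage output.
    throw new Error(`frameCount must be an integer >= 2, but was ${frameCount}`);
  }
  const ts: number[] = [];
  for (let i = 0; i < frameCount; i++) {
    ts.push(start + ((end - start) * i) / (frameCount - 1));
  }
  return ts;
}
```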

Submitting

Make sure you commit AND push your work to your repository on github.mit.edu. We will use the state of your repository on github.mit.edu as of 10:00pm on the deadline date. When you git push, the continuous build system attempts to compile your code and run the public tests (which are only a subset of the autograder tests). You can always review your build results at didit.mit.edu/6.102/sp25.

Didit feedback is provided on a best-effort basis:

  • There is no guarantee that Didit tests will run within any particular timeframe, or at all. If you push code close to the deadline, the large number of submissions will slow the turnaround time before your code is examined.
  • If you commit and push right before the deadline, it’s okay if the Didit build finishes after the deadline, because the commit-and-push was on time.
  • Passing some or all of the public tests on Didit is no guarantee that you will pass the full battery of autograding tests — but failing them is almost sure to mean lost points on the problem set.

Grading

Your overall ps1 grade will be computed as approximately:
~40% alpha autograde (including online exercises) + ~10% alpha manual grade + ~35% beta autograde + ~15% beta manual grade

The autograder test cases will not change from alpha to beta, but their point values will. In order to encourage test-first programming, alpha autograding will put more weight on your tests and less weight on your implementations. On the beta, autograding will look for both good testing and good implementations. Test cases may be worth zero points on the alpha and nonzero points on the beta, or vice versa.

Manual grading of the alpha may examine any part of your solution, including specs, explanations we requested, test suites, implementations, and your Git commit history. Manual grading of the beta may examine any part, and how you addressed manual grading and code review feedback from the alpha.