Problem Set 1: Flashcards
- Alpha due Tuesday, February 22, 2022, 10:00 pm
  Because this deadline is extended due to the student holiday on February 21, you may take at most 1 slack day on PS1 alpha.
- Code reviews due Friday, February 25, 2022, 11:00 am
- Beta due Monday, February 28, 2022, 10:00 pm
The purpose of this problem set is to give you practice with test-first programming and specifications. Given a set of specifications, you will write unit tests that check for compliance with the specifications, and then implement code that meets the specifications. You will also write a stronger version of one of the specifications, and test and implement your stronger spec.
Get the code
1. Ask Didit to create a remote `psets/ps1` repository for you on github.mit.edu.
2. Clone the repo. Find the `git clone` command at the top of your Didit assignment page, copy it entirely, paste it into your terminal, and run it.
3. Run `npm install`, then open the project in Visual Studio Code.

See Problem Set 0 if you need a refresher on how to create, clone, or set up your repository.
Overview
Have you ever used flashcards – maybe to improve your vocabulary either in your native language or in a foreign language you want to learn? A flashcard has two sides, a front and a back, which might be a vocabulary word and its meaning. To use a flashcard for learning practice, you look at the front and try to recall what’s on the back, then look at the back to see if you were right. Flashcards are an example of active learning, which is much more effective than passive reading or listening.
A flashcard system is a scheduling algorithm that decides when to practice each flashcard for the most efficient learning. Flashcard systems take advantage of spaced repetition, the principle that we learn more durably when practice is spaced out over time rather than crammed together, and furthermore that the rate of forgetting something is slower the more we have practiced it, so that the spacing between practice can increase as a concept or skill becomes more learned.
This problem set uses a spaced repetition flashcard algorithm that we will call the Modified-Leitner system, because it is inspired by the Leitner system but differs in the details. In a Leitner system, flashcards are grouped into numbered learning buckets. Low-numbered buckets contain flashcards that are less well-learned, so low-numbered buckets are scheduled for practice more often than high-numbered buckets. As a flashcard receives more and more successful practice, it moves up to higher buckets, until finally reaching a retired bucket (e.g., bucket 5). Flashcards in the retired bucket are considered well-enough learned not to need practice anymore.
Here is the specific behavior of our Modified-Leitner algorithm:
- A new card, before its first practice, is assumed to be in bucket 0.
- On day n of the learning process, the user collects all the buckets that are scheduled for practice that day, and practices every card found in those buckets.
- Cards in the retired bucket are not scheduled for practice.
- Bucket 0 cards are practiced every day, bucket 1 cards every two days, bucket 2 cards every four days, and so forth. In general, bucket i is practiced every 2^i days.
- The user is of course free to practice new cards, or unscheduled cards of their own choice, or to practice cards repeatedly.
- Whenever the user practices a flashcard:
- if they get the card wrong – they are unable to correctly recall what’s on the back of the card – then the card is put back in bucket 0.
- if they find the card easy – getting it right without much mental effort – then the card moves up from bucket i to bucket i+1 (unless the card is already in the retired bucket).
- if they find the card hard – getting it right but only with effort – then the card moves down to bucket i-1 (unless the card is already in bucket 0).
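The scheduling and card-movement rules above can be sketched in code. This is a hedged sketch, not the staff spec: the `AnswerDifficulty` names, the `retiredBucket` parameter, and the day-numbering convention (bucket i is practiced on days divisible by 2^i) are assumptions for illustration, and the specs in `algorithm.ts` are authoritative.

```typescript
// Hypothetical sketch of the Modified-Leitner rules; names and the exact
// day-numbering convention are illustrative, not the staff spec.

type AnswerDifficulty = "wrong" | "hard" | "easy";

// Which buckets are scheduled for practice on a given day, assuming
// bucket i is practiced on days divisible by 2^i. The retired bucket is
// excluded by passing numBuckets equal to the retired bucket number.
function scheduledBuckets(day: number, numBuckets: number): number[] {
    const scheduled: number[] = [];
    for (let i = 0; i < numBuckets; i++) {
        if (day % 2 ** i === 0) {
            scheduled.push(i);
        }
    }
    return scheduled;
}

// Where a card moves after one practice trial.
function nextBucket(current: number, answer: AnswerDifficulty, retiredBucket: number): number {
    switch (answer) {
        case "wrong":
            return 0;                                    // back to bucket 0
        case "hard":
            return Math.max(0, current - 1);             // down one, not below 0
        case "easy":
            return Math.min(retiredBucket, current + 1); // up one, capped at retired
    }
}
```

For example, on day 4 this sketch schedules buckets 0, 1, and 2, and a card in bucket 2 answered "easy" moves to bucket 3.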
This problem set consists of a group of functions that can be used to implement the Modified-Leitner flashcard system:

- `toBucketSets()` converts a `Map` representation of learning buckets into an `Array`-of-`Set` representation
- `getBucketRange()` finds the range of buckets that contain flashcards, as a rough measure of progress
- `practice()` selects cards to practice on a particular day
- `update()` updates a card’s bucket number after a practice trial
- `deduplicate()` consolidates flashcards by combining redundant cards
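For intuition about the first conversion, a sketch of a Map-to-Array-of-Set transformation is shown below. The map orientation (card to bucket number), the use of plain strings for cards, and the function name are assumptions for illustration; see `algorithm.ts` and `flashcards.ts` for the real signature and the card type.

```typescript
// Hypothetical sketch of a Map -> Array-of-Set conversion like the one
// toBucketSets() performs. Assumes the map goes from card to bucket
// number and that cards are plain strings; the real signature and
// Flashcard type live in algorithm.ts and flashcards.ts.

function toBucketSetsSketch(bucketMap: Map<string, number>): Array<Set<string>> {
    // highest bucket number in use, or -1 if the map is empty
    const maxBucket = Math.max(-1, ...bucketMap.values());
    const buckets: Array<Set<string>> = [];
    for (let i = 0; i <= maxBucket; i++) {
        buckets.push(new Set());
    }
    for (const [card, bucket] of bucketMap) {
        buckets[bucket].add(card);
    }
    return buckets;
}
```

Note that intermediate buckets with no cards still appear as empty sets, so the array index corresponds to the bucket number.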
In addition to implementing these functions yourself, you will also write a test suite for each function, carefully following the function’s spec. Your test suites will be graded in part by running them against various staff implementations of the function, some of which obey the spec and some which don’t. A good test suite should pass the good implementations and reject the bad ones.
You are also asked to change exactly one of these function specs, `deduplicate()`. Its current spec is very weak, so you will strengthen the spec, and then test and implement your stronger spec. Unlike your test suites for the other functions, your `deduplicate()` tests won’t be run against good and bad staff implementations, because you’ll be changing the spec.
Steps
Since we are doing iterative test-first programming, your workflow for each function should follow the order specs → tests → implementation, iterating as needed. Make steady progress across functions and down the workflow, in the following pattern:
- Read this entire handout first.
- Read the specs for all functions carefully.
- For `toBucketSets()` and `getBucketRange()`:
  - Write tests for one function. Add/commit/push.
  - Write tests for the other function. Add/commit/push.
  - Implement one function. Add/commit/push.
  - Implement the other function. Add/commit/push.
- For `practice()` and `update()`:
  - Write tests for one function. Add/commit/push.
  - Write tests for the other function. Add/commit/push.
  - Implement one function. Add/commit/push.
  - Implement the other function. Add/commit/push.
- For `deduplicate()`:
  - Write tests for the staff-provided weak spec. Add/commit/push.
  - Code a trivial implementation of the weak spec, to exercise your tests. Add/commit/push.
  - Strengthen the spec, following instructions later in this handout. Add/commit/push.
  - Add tests for the revised spec. Add/commit/push.
  - Implement the revised spec. Add/commit/push.
- Iterate as needed. Add/commit/push after each step.
|           | toBucketSets | getBucketRange | practice | update | deduplicate | ...iterate... | ...iterate... |
|-----------|--------------|----------------|----------|--------|-------------|---------------|---------------|
| spec      | 1            | 1              | 1        | 1      | 1           | 4c            | 4f            |
| test      | 2a           | 2b             | 3a       | 3b     | 4a          | 4d            | 4f            |
| implement | 2c           | 2d             | 3c       | 3d     | 4b          | 4e            | 4f            |
The next few sections of this handout have more specific advice about these steps:
Use `git add`/`git commit`/`git push` after every step that changes your code.
Committing frequently – whenever you’ve written some tests, fixed a bug, or added a new feature – is a good way to use version control, and will be a good habit to have for your team projects and as a software engineer in general, so start forming the habit now.
Your git commit history for this problem set should demonstrate test-first programming, have frequent small commits, and include short but descriptive commit messages.
What you can and can’t change
In order for your overall program to meet the specification of this problem set, you are required to keep some things unchanged:
- Don’t change this file: the file `flashcards.ts` should not be modified at all.
- Don’t change these filenames: the files `algorithm.ts` and `algorithmTest.ts` must use those names and remain in the folders where they are.
- Don’t change the function signatures and specifications: the exported functions `toBucketSets()`, `getBucketRange()`, `practice()`, and `update()` must use the function signatures and the specifications that we provided.
- Don’t change the function signature or weaken the specification: the function `deduplicate()` must keep its signature and have a stronger spec than we provided.
- Don’t include illegal test cases: the tests you implement in `algorithmTest.ts` must respect the specifications that we provided for the functions you are testing.
Aside from these requirements, however, you are free to add new functions and new classes if you wish.
Specifications
Before you start, read the specs carefully, and take notes about what you observe. You can either read the TypeScript source files directly in VS Code, or read the TypeDoc documentation for all classes generated from the source files.
Keep in mind these facts about specifications:
Some specs have preconditions. Recall from the specs reading that when preconditions are violated by the client, the behavior of the function is completely unspecified.
Some specs have underdetermined postconditions. Recall that underdetermined postconditions allow a range of behavior. When you’re implementing such a function, the exact behavior of your function within that range is up to you to decide. When you’re writing a test case for the function, the test must allow for the full range of variation in the behavior of the implementation, because otherwise your test case is not a legal client of the spec as required above.
Exported functions can be used anywhere. These functions are independent modules, which might be called by various parts of a flashcard system, not necessarily just the Modified-Leitner algorithm. A function implementation must be able to handle all inputs that satisfy the precondition, even if they don’t arise in the Modified-Leitner algorithm. And it may return any output that satisfies its postcondition, even if that doesn’t seem useful in the Modified-Leitner algorithm.
Testing
You should partition each function’s inputs and outputs, write down your partitions in a testing strategy comment, and choose test cases to cover the partitions.
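As one concrete illustration, a partition-based strategy comment might look like the sketch below. Both the partitions and the tiny stand-in implementation (which returns the lowest and highest nonempty bucket indices, or `undefined` when all buckets are empty) are invented here just to make the example runnable; the real `getBucketRange()` spec in `algorithm.ts` is authoritative.

```typescript
// Hypothetical testing-strategy comment plus a stand-in implementation,
// used only to make the covering test inputs concrete. The real spec for
// getBucketRange() in algorithm.ts is authoritative.
//
// Testing strategy for getBucketRange(buckets):
//   partition on number of nonempty buckets: 0, 1, >1
//   partition on gaps: nonempty buckets contiguous vs. separated by empties
//   partition on range start: bucket 0 vs. above bucket 0

function getBucketRangeSketch(buckets: Array<Set<string>>): [number, number] | undefined {
    // indices of buckets that contain at least one card
    const nonempty = buckets.flatMap((bucket, i) => (bucket.size > 0 ? [i] : []));
    if (nonempty.length === 0) {
        return undefined; // covers the "0 nonempty buckets" subdomain
    }
    return [Math.min(...nonempty), Math.max(...nonempty)];
}
```

Each `it()` test case would then pick one input per subdomain: for example, all-empty buckets; a single nonempty bucket at index 0; and nonempty buckets at indices 1 and 3 with a gap between them.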
For each function, you’ll find its spec and implementation in a TypeScript file in the `src` folder, and a corresponding Mocha test file in the `test` folder.
Separating implementation code from test code is a common practice in development projects.
It makes the implementation code easier to understand, uncluttered by tests, and easier to package up for release.
The test file for a function may already have example tests in it, which are provided as models. We recommend that you read and then throw away those example tests and write your own.
Your test cases should be chosen using partitioning. This approach is explained in the reading about testing.
Include a comment at the top of each test suite class describing your testing strategy. Examples are shown in the reading about testing.
Your test cases should be small and well-chosen. Don’t use a large set of data for each test. Instead, create inputs carefully chosen to test the partition you’re trying to test.
Your tests should find bugs. We will grade your test cases in part by running them against buggy implementations and seeing if your tests catch the bugs. So consider ways an implementation might inadvertently fail to meet the spec, and choose tests that will expose those bugs.
Your tests must be legal clients of the spec. We will also run your test cases against legal, variant implementations that still strictly satisfy the specs, and your test cases should not complain for these good implementations. That means that your test cases can’t make extra assumptions that are only true for your own implementation.
Put each test case in its own `it()` function. This will be far more useful than a single large test function, since it pinpoints the problems in the implementation.

Run testing coverage. When it’s time to do glass box testing, run your test suites with `npm run coverage`.

Be careful calling helper functions from testing code. Your test cases in `algorithmTest.ts` must not call a new helper function that you have defined in `algorithm.ts` or some other `src/` file. We detach your testing code from the rest of your solution when we run your test suites against staff implementations, so your code in `src/` will not be available to your test suites. Put helper functions needed by testing code into `test/`.

Again, keep your tests small. Don’t use unreasonable amounts of resources (such as arrays or strings of length `MAX_SAFE_INTEGER`). We won’t expect your test suite to catch bugs related to running out of resources; every program fails when it runs out of resources.

Use `strictEqual` and `deepStrictEqual`. For example, `assert.equal(1, '1')` passes because it uses the dangerous `==` comparison. Don’t use it; always use `strictEqual` for immutable built-in types. However, `assert.strictEqual([1], [1])` fails because it checks that the two arguments refer to the same array instance. To compare data structures, use `deepStrictEqual` as shown in the provided example tests.

Add/commit/push frequently. Whenever you do a nontrivial amount of work – e.g. after writing a testing strategy comment; after choosing some test inputs; after writing Mocha tests and seeing that they pass – you should add/commit/push your work with git.
Implementation
Implement each function, and revise your implementation and your tests until all your tests pass.
Some advice about implementation:
Helper functions. If you want to write helper functions for your implementation, then you can put them in the `algorithm.ts` file, alongside the other functions.

Don’t call testing code. Don’t put a helper function in the `test/` folder if you need to call it from implementation code in `src/`. Testing code should be completely detachable from the implementation, so that the implementation can be packaged up separately for deployment. We also detach your testing code when we run staff tests against your implementation. Put helper functions needed by implementation code into `src/`.
.Eliminate warnings. Revise your code to address all the yellow-underlined warnings shown in VS Code. These warnings should include both TypeScript compiler warnings and ESLint warnings, because ESLint is enabled for this problem set.
Check testing coverage. Do glass box testing and revise your tests until you have satisfactory code coverage.
Review your own code. Read your code critically with an eye to making it as SFB, ETU, and RFC as possible.
Test frequently. Rerun your tests whenever you make a change to your code.
Use Mocha’s `-f` argument to debug a single test. To run a subset of test cases from your whole test suite, use `npm test -- -f 'pattern'`. Only tests whose `it()` description contains the string `pattern` will be run. For example, for the starting test suite, the command `npm test -- -f 'different buckets'` runs only the test whose description is `'covers two cards in different buckets'`. This is useful for debugging, because it allows you to focus on just one failing test at a time.

Use console.log or util.inspect for print debugging. Many types, including `Map` and `Set`, have very unhelpful `toString()` methods. This means if you try to debug using, say, `console.log("buckets are " + bucketMap)`, you will likely see something unhelpful like “buckets are [object Map]”. There are two ways to make this better:

- Use multiple arguments to `console.log()` instead of relying on `+` and `toString()`:

  ```typescript
  console.log("buckets are", bucketMap);
  ```

  This way, `console.log()` takes responsibility for displaying the `bucketMap` object, and it does a much better job of showing you what’s inside it.

- Use `util.inspect()` to turn the object into a string:

  ```typescript
  import util from 'util';
  console.log("buckets are " + util.inspect(bucketMap));
  ```
Add/commit/push frequently. Whenever you do a nontrivial amount of work – e.g. after writing a function body and seeing that the tests pass, or after finding a bug, adding a regression test for it, and fixing it – you should add/commit/push your work with git.
After you’ve implemented all the functions, you can use `npm start` to run `main.ts`, which uses the functions in a simulated flashcard application.

The `main.ts` file is not part of the spec of this problem set and is not used for grading; you are free to edit it as you wish.
Strengthening deduplicate()
Almost all the specs in this problem set should be used exactly as-is, with one exception.
The `deduplicate()` function currently has a rather weak spec. You should at first use this weak spec to write some tests and a simple implementation. Make sure your tests respect the weak spec, and that your implementation passes your tests. Add/commit/push.

Then revise the spec of `deduplicate()` by editing its TypeDoc comment in `algorithm.ts`.
Your new spec should be:
- stronger than the original spec
- deterministic
- helpful, in the sense of helping a user in realistic situations
- extending to a new learning domain
When revising the spec, you will need to carefully consider the meaning of two words in the spec:
Redundant. Your stronger spec will have to define what “redundant” means. At a minimum, your revised spec should require:
- cards with identical fronts are redundant, as in the original weak spec
- plus one new nontrivial redundancy rule of your choice for a learning domain that is not the English language
Information. Your spec should also define what card is returned when the two cards are redundant, in a clear, deterministic, and helpful way, which is stronger than the original weak spec. Thinking about the returned card will involve thinking about the “information” content of cards, and how to combine information deterministically.
Keep in mind that you are strengthening the spec, so `deduplicate` can still be called in all the situations in which it could be called under the original weak spec.
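As one concrete illustration only: a deterministic way to choose the returned card is to combine redundant cards by a fixed rule. The `Card` shape and the rule below (keep the back with more information, defined here as the longer string, breaking ties lexicographically) are invented for this sketch, not the staff spec; your own spec should make its own, possibly better, design decisions.

```typescript
// Hypothetical deterministic rule for combining two redundant cards
// (cards with identical fronts). The Card type is illustrative; the
// real Flashcard type lives in flashcards.ts.

interface Card {
    front: string;
    back: string;
}

function combineRedundant(a: Card, b: Card): Card {
    if (a.back.length !== b.back.length) {
        return a.back.length > b.back.length ? a : b; // longer back wins
    }
    return a.back <= b.back ? a : b; // deterministic lexicographic tiebreak
}
```

Because the rule depends only on the card contents, any implementation of the spec must return the same card for the same pair of inputs, which is what "deterministic" demands.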
This is a design problem, and you will need to think broadly and carefully. In the criteria given for your spec above, “stronger” and “deterministic” should be clear (see the Topics and Terms if you need to refresh their meanings), but “helpful” is likely to involve tradeoffs and design decisions. There is no single right answer, but there are better and worse decisions.
After you change your spec for `deduplicate`, revise your test suite in `algorithmTest.ts` and your implementation in `algorithm.ts` so that the implementation satisfies your stronger spec and passes your updated tests.
Some advice about this problem:
Work iteratively. Bite off the requirements one bit at a time. First handle identical card fronts – update the spec, write tests, then implement. Then for your new learning domain: spec, test, implement.
Keep your original tests if possible. Each time you change the spec, revise your test suite to test your new spec by updating the testing strategy and test cases in `algorithmTest.ts`. Keep the strategy and tests you wrote for the original weak staff-provided spec, so that your test suite is checking your implementation for compliance with the original weak spec as well as your stronger spec.

Revise specs and tests as needed. If you find while you are testing or implementing that you need to revise your spec, do so.
Add/commit/push frequently. If you commit after every step of this process (spec, commit, test, commit, implement, commit, etc.), then Git will keep track of your old versions of spec, tests, and code, so that you can refer back to them or even recover them if you have to. You don’t have to preserve your implementation of the original staff-provided weak spec in your final code, because your Git history should have it.
Submitting
Make sure you commit AND push your work to your repository on github.mit.edu.
We will use the state of your repository on github.mit.edu as of 10:00pm on the deadline date.
When you `git push`, the continuous build system attempts to compile your code and run the public tests (which are only a subset of the autograder tests).
You can always review your build results at didit.mit.edu/6.031/sp22.
Didit feedback is provided on a best-effort basis:
- There is no guarantee that Didit tests will run within any particular timeframe, or at all. If you push code close to the deadline, the large number of submissions will slow the turnaround time before your code is examined.
- If you commit and push right before the deadline, the Didit build does not have to complete in order for that commit to be graded.
- Passing some or all of the public tests on Didit is no guarantee that you will pass the full battery of autograding tests — but failing them is almost sure to mean lost points on the problem set.
Grading
Your overall ps1 grade will be computed as approximately:
~40% alpha autograde + ~10% alpha manual grade + ~35% beta autograde + ~15% beta manual grade
The autograder test cases will not change from alpha to beta, but their point values will. In order to encourage test-first programming, alpha autograding will put more weight on your tests and less weight on your implementations. On the beta, autograding will look for both good testing and good implementations.
Manual grading of the alpha will examine your test suites, your revised `deduplicate()` spec, and your git commit history.
Manual grading of the beta may examine any part, including re-examining the test suites and spec, and how you addressed code review feedback.