Direct FJP Grading

December 9, 2017

By Nirmaldasan

(nirmaldasan@hotmail.com)

James N. Farr, James J. Jenkins, and Donald G. Paterson suggested a New Reading Ease Index in their 1951 article ‘Simplification of Flesch Reading Ease Formula’. They replaced the syllable count in the Flesch formula with a monosyllable count. This irked Rudolf Flesch himself and the readability expert George Klare. The creators of the new formula responded to the criticism and produced fresh data to show that both the formulae yielded ‘substantially equivalent results’.

Since the number of monosyllables in any passage is smaller than the number of syllables, the Farr-Jenkins-Paterson (FJP) formula is a fine simplification, enjoying a high correlation of 0.93 with the Flesch formula.

This is how it works. Take a sample of 100 words from the passage to be tested for readability. Count the number of monosyllabic words (M). Also calculate the average words per sentence (AWS). Substitute the values in the formula:

FJP Reading Ease Index = 1.599*M – 1.015*AWS – 31.517

The formula yields a score which may be converted to Grade Levels by looking up a conversion table – the same that is used for the Flesch formula. Since the scoring system is the same, it becomes easy to compare the old and the new formulae. The authors tested the formula and found ‘perfect agreement for 237 of the 360 paragraphs’ with the Flesch formula. They say: “There is a disagreement of only one step for 119 paragraphs,” and add: “In only four instances is there a disagreement of two steps (in one instance the old index was ‘Fairly Easy’ and the new was ‘Fairly Difficult’, and in the other three instances the old index was ‘Standard’ and the new index was ‘Difficult’).”

This formula, in spite of all its decimal points, is not as intimidating as the Flesch formula. However, having to use a conversion table along with the formula is an inconvenience that ought to be eliminated, at little cost to accuracy.

Readability critics may say counting monosyllables is ‘baby talk’ or ‘primer style’. What will they say about counting non-monosyllables? Surely, they have to agree that this is neither ‘baby talk’ nor ‘primer style’. Then show them this exact equation: M (monosyllables) + N (non-monosyllables) = W (words). That may silence them.

But for those who wish to use the FJP formula without the conversion table, here is my simplification called Direct FJP Grading = 0.2*AWS + 0.3*N – 4.

AWS is the average words per sentence and N is the number of non-monosyllabic words in a passage of 100 words.
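
For readers who would rather let a computer do the arithmetic, here is a minimal sketch of both formulae in Python (the language, the function names and the illustrative figures are assumptions of mine, not part of the formulae):

    def fjp_reading_ease(m, aws):
        """Farr-Jenkins-Paterson Reading Ease Index (higher scores mean easier text)."""
        return 1.599 * m - 1.015 * aws - 31.517

    def direct_fjp_grade(n, aws):
        """Direct FJP Grading in years of schooling, without a conversion table."""
        return 0.2 * aws + 0.3 * n - 4

    # Assumed figures for illustration: 70 monosyllables in a 100-word sample
    # (so N = 30 non-monosyllables) and an average of 15 words per sentence.
    print(fjp_reading_ease(70, 15))   # about 65.2 -- roughly 'Standard' on the shared conversion table
    print(direct_fjp_grade(30, 15))   # 8.0 years of schooling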


Reviewing The Strain Index

November 15, 2017

By Nirmaldasan

(nirmaldasan@hotmail.com)

In 2005 I created the strain index, a readability formula that grades texts on a scale of 1 to 17+ years of schooling. I first wrote a short article about it and later, on the readability expert William DuBay’s advice, tested the formula on graded passages. In 2007, I received an M.Phil. from Madurai Kamaraj University for my research (under the guidance of Dr. Nirmal Selvamony) titled ‘A Quantitative Analysis of Media Language’, in which I had demonstrated the validity and application of the strain index. Subsequently, I created this weblog, Readability Monitor, to promote the formula.

Ten years later, in October 2017, Lambert Academic Publishing published my dissertation. So it is time to review the strain index. I do not know how many people use the formula. I have done my best to promote the strain index not only in my writings but also in my classes. I wrote ‘The Strain Index: A New Readability Formula’ for Journalism Online, and The Hoot accepted my humble request to reproduce the article on its website. I later wrote ‘Longer The Sentence, Greater The Strain’ for Vidura, a journal of the Press Institute of India. These and other articles about the strain index are all available in this weblog Readability Monitor.

So how do I persuade people to buy my book? The blurb says: “A Quantitative Analysis Of Media Language offers an alternative readability formula called Strain Index to the most popular Fog Index of Robert Gunning. Both the formulas were compared by testing them on graded English textbooks. The Strain Index enjoys a very high correlation of 0.97 with the Fog Index. The advantage of the Strain Index is that it uses only one variable instead of two employed by the Fog Index. The readability expert William DuBay called the Strain Index remarkably simple.”

For those who just want to know what the formula is and how to use it, there is obviously no need to buy. But those who are into readability research – big names like Stylewriter and Lexile – may have the insatiable curiosity to find out what I have done with my formula and what the formula does. University libraries may also find my book a welcome addition. I would especially request scholars who have substantial research funds to buy this little book of mine and make me richer by a few Euros.

How To Grade Words?

October 31, 2016

By Nirmaldasan

(nirmaldasan@hotmail.com)

Edgar Dale and Joseph O’Rourke in ‘Living Word Vocabulary’ (LWV) graded thousands of words using what I would like to call the Graded Survey Method. According to this method, the grade of a word is the lowest grade in which at least 67% of the students found it familiar. In the ‘Plain English Lexicon’ Martin Cutts writes about the LWV: “It covered some 44,000 word meanings and involved 320,000 students. For each word, roughly 200 students were tested using a 3-choice multiple-choice test.”

The work is useful and impressive. But I was intimidated by the amount of labour involved. “There must be a shortcut,” I thought, and found one too.

The grading of words involves two simple steps: 1. Identifying the given word as familiar or unfamiliar; and 2. Counting syllables of a familiar word or counting letters of an unfamiliar word.

The first step can be easily accomplished by using any one of the following methods:

  • Group Method: Present the word to a group of five persons. The word is considered familiar if four out of five think so. This method may also be called the 80% method.
  • Martin Cutts Method: Look at the frequency of the word in the British National Corpus. “To give a very rough guide, I judge that words scoring more than about 1,200 are fairly common,” says Martin Cutts in the ‘Plain English Lexicon’. If the word does not occur in the corpus or if its frequency is less than 1,200, then the word is considered unfamiliar.
  • List Method: A word is considered familiar if it occurs in any list of familiar words. We may use Edgar Dale’s List of 3000 familiar words or Kev Nair’s List of Maximum General Utility Words (2788).
  • Media Method: A word is considered familiar if it is frequently heard on radio and television or frequently found in newspapers and magazines.
  • Subjective Method:  If the word is familiar to me, then it is assumed that the word is familiar to others too. A better version of this method is that if I think the word is familiar to all, then it must be so.

The second step takes no time and little effort. Here we go!

  • The Grade Level of Familiar Word (GLFW) = S (number of syllables of the word)
  • The Grade Level of Unfamiliar Word (GLUW) = L (number of letters of the word)

NOTE: The Grade Level is the number of years of schooling required to understand a text. Usually, the scale is 1 to 17+ years of schooling.
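
A minimal sketch of the two-step grading in Python (an assumption of mine; the syllable count below uses a rough vowel-group heuristic rather than a dictionary, so treat it as an approximation):

    import re

    def grade_word(word, familiar):
        """GLFW = syllable count for a familiar word; GLUW = letter count for an unfamiliar one."""
        if familiar:
            # crude syllable estimate: each run of vowels (a e i o u y) counts as one syllable
            syllables = len(re.findall(r"[aeiouy]+", word.lower()))
            return max(1, syllables)
        return len(word)

    print(grade_word("table", True))         # 2  -- familiar word, two syllables
    print(grade_word("ubiquitous", False))   # 10 -- unfamiliar word, ten letters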

The Lemma Readability Index

January 17, 2016

By Nirmaldasan

(nirmaldasan@hotmail.com)

The Dale-Chall readability formula uses a list of 3000 familiar words. This formula has a very high correlation with text difficulty. However, readability formulae that do not use a list such as Robert Gunning’s Fog Index are more popular as they are easy to apply. But there is no reason to discard the list as it tests each word of a text. Let us look at a shorter list of 100 commonest words, which typically covers 50% of the over two billion words in the Oxford English Corpus. This list in rank order is found in an article titled ‘The OEC Facts About The Language’: http://www.oxforddictionaries.com/words/the-oec-facts-about-the-language

The list uses the idea of lemmas, ‘a lemma being the base form of a word’.  An alphabetical arrangement of the words would help us use the list for measuring readability.

Commonest Lemma List

a  about  after  all  also  an  and  any  as  at  (10 lemmas)

back  be  because  but  by  (5 lemmas)

can  come  could  (3 lemmas)

day  do  (2 lemmas)

even  (1 lemma)

first  for  from  (3 lemmas)

get  give  go  good  (4 lemmas)

have  he  her  him  his  how  (6 lemmas)

I  if  in  into  it  its  (6 lemmas)

just  (1 lemma)

know  (1 lemma)

like  look  (2 lemmas)

make  me  most  my  (4 lemmas)

new  no  not  now  (4 lemmas)

of  on  one  only  or  other  our  out  over  (9 lemmas)

people (1 lemma)

say  see  she  so  some  (5 lemmas)

take  than  that  the  their  them  then  there  these  they  think  this  time  to  two  (15 lemmas)

up  us  use (3 lemmas)

want  way  we  well  what  when  which  who  will  with  work  would  (12 lemmas)

year  you  your (3 lemmas)

New Formula

The Lemma Readability Index (LRI) measures texts on a scale of 1 to 17 years of schooling. The LRI is the number of words per sentence not in the Commonest Lemma List. Take a sample of n sentences from a text. Count the Words Not in List (WNL). Then, LRI = WNL/n.

Counting Guidelines

  1. Do not count proper names (names of people, places, days, months, organisations … )
  2. Do not count numerals, symbols, abbreviations, acronyms
  3. Do not count lemmas that are in the list
  4. Do not count words that are grammatically associated with the lemmas in the list. Some examples:
     • Since be is in the list, do not count being, am, are, is, was, were
     • Since take is in the list, do not count taken, taker, takers, takes, taking, took
     • Since new is in the list, do not count newer, newest, newly, news, newsy
     • Since time is in the list, do not count timed, timely, timer, times, time’s, timing
  5. Do not count compound words if each part is in the list. Some examples:
     • Since some and how are in the list, do not count somehow
     • Since any and way are in the list, do not count anyway
     • Since an and other are in the list, do not count another
     • Since good and will are in the list, do not count goodwill
  6. Count compound words, as many times as they appear, even if one part is not in the list. Some examples:
     • Since how is in the list but ever is not, count however
     • Since will is in the list but free is not, count freewill
  7. Count every single word (even repetitions) that is neither in the list nor grammatically associated with the lemmas in the list

These guidelines solve most of the counting problems. But one is likely to come across a number of deceptive words. For instance, a and do are in the list, so ado begs not to be counted (fifth guideline). However, ado has to be counted because it is not a compound word. Again, take the words better and more. Though neither is in the list, one is tempted to exclude them from the count for semantic reasons: better is related to good, and more is related to some and most. Resist the temptation and count every deceptive word. Remember that if we do not count more, then we cannot count moreover either. Let’s not quibble.

Application

Let us apply the formula on the following paragraph:

“The first batch of students of the Certificate in Online Journalism programme announced that they are online at the viva voce examination on Saturday (29 November 2014). They have created for themselves a website, a blog and a twitter account.”

Let us follow the counting guidelines.

  1. Proper names are not counted (Saturday, November)
  2. Numerals are not counted (29, 2014)
  3. Lemmas in the list are not counted (The, first, of, in, that, they, at, on, have, for, a, and)
  4. Words grammatically associated with the lemmas are not counted (are)
  5. Compound words, if each part is in the list, are not counted (—)
  6. Compound words, even if one part is not in the list, are counted (Online, online, themselves, website)
  7. Every single word which is neither in the list nor grammatically associated with the lemmas in the list is counted (batch, students, Certificate, Journalism, programme, announced, viva, voce, examination, created, blog, twitter, account)

In the order of appearance, here is the list of words not in the list: batch, students, Certificate, Online, Journalism, programme, announced, online, viva, voce, examination, created, themselves, website, blog, twitter, account. WNL = 17.

Since the number of sentences in the sample is 2, LRI = WNL/n = 17/2 = 8.5 years of schooling.
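
For anyone who wants to automate the mechanical part of the count, here is a minimal sketch in Python (an assumption of mine; the article prescribes no software). It handles only the list lookup, the numeral check and the WNL/n arithmetic; proper names, grammatically associated forms and compound words made of listed parts still call for human judgement, so they are passed in by hand:

    COMMONEST_LEMMAS = {
        "a", "about", "after", "all", "also", "an", "and", "any", "as", "at",
        "back", "be", "because", "but", "by", "can", "come", "could", "day", "do",
        "even", "first", "for", "from", "get", "give", "go", "good", "have", "he",
        "her", "him", "his", "how", "i", "if", "in", "into", "it", "its",
        "just", "know", "like", "look", "make", "me", "most", "my", "new", "no",
        "not", "now", "of", "on", "one", "only", "or", "other", "our", "out",
        "over", "people", "say", "see", "she", "so", "some", "take", "than", "that",
        "the", "their", "them", "then", "there", "these", "they", "think", "this", "time",
        "to", "two", "up", "us", "use", "want", "way", "we", "well", "what",
        "when", "which", "who", "will", "with", "work", "would", "year", "you", "your",
    }

    def lemma_readability_index(text, n_sentences, extra_exclusions=frozenset()):
        """LRI = WNL / n for a sample of n_sentences sentences."""
        tokens = [w.strip(".,;:()\"'").lower() for w in text.split()]
        wnl = sum(1 for w in tokens
                  if w and not w.isdigit()           # guideline 2, roughly (numerals)
                  and w not in COMMONEST_LEMMAS      # guideline 3
                  and w not in extra_exclusions)     # guidelines 1, 4 and 5, by hand
        return wnl / n_sentences

    # Reproducing the worked example above: "are" (a form of "be") and the proper
    # names "Saturday" and "November" are excluded by hand.
    sample = ("The first batch of students of the Certificate in Online Journalism "
              "programme announced that they are online at the viva voce examination "
              "on Saturday (29 November 2014). They have created for themselves a "
              "website, a blog and a twitter account.")
    print(lemma_readability_index(sample, 2, {"are", "saturday", "november"}))   # 8.5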

Comparison

Let us compare the LRI with the Fog Index (FI).

Average Words per Sentence (AWS) = 40/2 = 20

Percentage of hard words (P) = (1/40)*100 = 2.5 [Not all polysyllables are hard. In this example, Certificate and Journalism are not counted as hard because they are part of the name of a programme. The only hard word is examination]

FI = 0.4*(AWS+P) = 0.4*(20+2.5) = 0.4*22.5 = 9 years of schooling.

The LRI compares very well with the FI. One needs to test the validity of the LRI on at least 100 samples. Please go ahead and put the LRI to the test. Thank you.

 

Related Articles

Direct Dale-Chall Grading: https://strainindex.wordpress.com/2008/03/10/direct-dale-chall-grading/

Plain Fog Index: https://strainindex.wordpress.com/2010/05/11/the-plain-fog-index/

Readability Conjectures: https://strainindex.wordpress.com/2008/05/16/readability-conjectures/

 

 

Speakability: The EMLU Formula

December 18, 2015

By Nirmaldasan

(nirmaldasan@hotmail.com)

Speakability is the child’s skill in producing meaningful utterances. Words of an utterance may be divided into prefixes, roots and suffixes – the smallest units of meaning called morphemes. The Mean Length of Utterance is the total number of morphemes divided by the total number of utterances. Usually, a sample of 100 utterances is taken for calculating the MLU. Graham Williamson’s Mean Length Of Utterance: http://www.sltinfo.com/mean-length-of-utterance/ is a very fine and comprehensive article on the subject.

The Expected Mean Length of Utterance (EMLU) is a simple formula that can diminish parental anxiety about a child’s speakability. If M is the age of the child in months, from 18 to 60, then EMLU = (M – 5) / 10. For example, if the child’s age is 25 months, then the EMLU is 2 morphemes per utterance. Parents should understand that some children may be fast or slow in gaining speakability skills. The EMLU formula just gives a ballpark figure. Parents may be happy if their children produce more morphemes than the formula indicates, but should not worry if they produce fewer. Sooner or later, children are bound to pick up their native language.
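
As a quick illustration, here is the formula as a small Python function (the language and the range check are my own additions):

    def expected_mlu(age_in_months):
        """Expected Mean Length of Utterance, in morphemes per utterance."""
        if not 18 <= age_in_months <= 60:
            raise ValueError("The EMLU formula applies from 18 to 60 months of age")
        return (age_in_months - 5) / 10

    print(expected_mlu(25))   # 2.0 morphemes per utterance, as in the example above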

 

The Simplicity Score Of Business Writing

October 27, 2014

By Nirmaldasan
(nirmaldasan@hotmail.com)

The average sentence length is arguably the best indicator of text difficulty. A writer who uses this yardstick has to divide the number of words by the number of sentences. If we choose a sample of 10 sentences, then the calculation becomes simpler. “Sentences in Time and Reader’s Digest vary considerably in length, but the average sentence length, issue after issue, is only about 17 words,” writes Robert Gunning in How To Take The Fog Out Of Writing.

If our writing measures up to this standard, then 10 sentences may contain about 170 words. Too much counting, you say? I have solved this problem with the help of a short sample of words, a count of complete sentences and a simple scoring system.

The Simplicity Score (SS) of a business text is the number of complete sentences in a sample of exactly 35 words. It is obvious that text simplicity increases with the number of complete sentences in the sample. The SS may vary on a five-point scale as follows: 0 (very hard), 1 (hard), 2 (standard), 3 (easy) and 4+ (very easy).

What’s the SS of the following paragraph from Gunning?

“But, while the Fog Index is handy for judging readability, it is not a formula for how to write. Don’t feel that you have written clearly just because your Fog Index is low. Anyone could put together a mumbo jumbo of short words in short sentences that would convey nothing at all to the reader.”

Let’s first draw an exact 35-word sample: “But, while the Fog Index is handy for judging readability, it is not a formula for how to write. Don’t feel that you have written clearly just because your Fog Index is low. Anyone could …”

The SS is 2 (standard).
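
Here is a minimal sketch of the Simplicity Score in Python (an assumption of mine; sentence endings are detected by a crude punctuation count, which is good enough for this sample):

    def simplicity_score(text):
        """Number of complete sentences in the first 35 words of the text."""
        sample = " ".join(text.split()[:35])
        # count sentence-ending punctuation inside the exact 35-word sample
        return sum(sample.count(p) for p in ".!?")

    gunning = ("But, while the Fog Index is handy for judging readability, it is not "
               "a formula for how to write. Don't feel that you have written clearly "
               "just because your Fog Index is low. Anyone could put together a mumbo "
               "jumbo of short words in short sentences that would convey nothing at "
               "all to the reader.")
    print(simplicity_score(gunning))   # 2 -- 'standard', matching the worked example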

All writers should do a bit of counting words and sentences and revise their writing for the sake of their readers. Before we send an article to the Press or a business proposal to a prospective customer, we should ask, “What’s the SS?”

Vocalic Readability Index

September 17, 2014

By Nirmaldasan

(nirmaldasan@hotmail.com)

Spotting vowels is easy; even a computer can do it. The vowels (a e i o u y) may not predict reading levels as reliably or as accurately as syllables can. But being closely associated with the syllables, vowels can measure text difficulty.

A syllable may have one or more vowels: by has one, tie has two, course has three and queue has four. In ‘The Vocalic Cloze Procedure’, I wrote: “The average syllable has three letters, of which two are usually consonants and one is a vowel.” I chanced upon a table of relative frequencies of alphabetic characters in Simon Singh’s The Code Book. H. Beker and F. Piper’s table had first appeared in Cipher Systems: The Protection Of Communication.

Based on a sample of 100,362 letters, the authors calculated the frequency of each letter of the alphabet. I summed the frequencies of only the vowels and obtained the figure 40.2%. Since the average syllable has three letters and the average word about five, this works out to roughly 1.2 vowels per syllable and 2 vowels per word.

That should suffice for us to derive the Vocalic Readability Index (VRI) = AVS / 4. The AVS is the average vowels per sentence, which is divided by 4 to match the text to the reading or grade level from 1 to 17+. The VRI can be easily derived from the W-Index or the S-Index or the L-Index; these indices of mine are discussed in another article titled ‘Seven Indices Of Readability’.

I tested the VRI on the 10 graded samples found in the appendix of Jeanne S. Chall and Edgar Dale’s Readability Revisited (the new Dale-Chall readability formula). The VRI predicts within two grade levels on all the tested samples; and within one grade level on 50% of the samples. The VRI was able to predict exactly the reading level of the passage beginning ‘The controversy over the laser-armed satellite …’, which has a reading level of 9-10. There were 189 vowels in 5 sentences. Therefore, the AVS is 189/5 = 37.8 and the VRI is 37.8/4 = 9.45.

To obtain a better estimate, let V25 be the number of vowels in 25 sentences. Then the VRI = V25 / 100. What is more, this formula can be easily computerised.
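
For the curious, here is the VRI as a small Python function (the language and the function name are assumptions of mine; y is counted as a vowel throughout, as in the list above):

    def vocalic_readability_index(text, n_sentences):
        """VRI = (average vowels per sentence) / 4, on a scale of 1 to 17+."""
        vowels = sum(1 for ch in text.lower() if ch in "aeiouy")
        return (vowels / n_sentences) / 4

    # With the figures reported above for the laser-armed satellite passage:
    # 189 vowels in 5 sentences gives AVS = 189/5 = 37.8 and VRI = 37.8/4 = 9.45.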

Basic Polyvowel Words

December 19, 2013

By Nirmaldasan

(nirmaldasan@hotmail.com)

C.K. Ogden’s Basic English has 850 words, just enough to communicate with a global audience. Ogden’s list along with 50 international words could define or describe any word in a dictionary. Winston Churchill was impressed but Rudolf Flesch was not.

There have been arguments for and against controlled English. I would suggest a mix of control and freedom. But before I present the details, here is a new classification of words based on vocalic length. Monovowels are words that have just one vowel letter; divowels, two vowel letters; and polyvowels, three or more vowel letters.

To find the vocalic length of a word, count all occurrences of a e i o u. Now y must also be counted if a syllable of a word has no a e i o u. Here are some examples: rhythm (monovowel; y is counted), stay (monovowel; only a is counted), youth (divowel; only o and u are counted), agony (polyvowel; a, o and y are counted).
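
Here is a minimal sketch of the vocalic length count in Python (an assumption of mine; the rule for y is approximated by counting a y only when neither of its neighbours is a e i o u, which handles the four examples above but is no substitute for true syllable analysis):

    def vocalic_length(word):
        """Count a e i o u, plus y when it stands in for a vowel (rough heuristic)."""
        w = word.lower()
        count = sum(1 for ch in w if ch in "aeiou")
        for i, ch in enumerate(w):
            if ch == "y":
                prev_ch = w[i - 1] if i > 0 else ""
                next_ch = w[i + 1] if i + 1 < len(w) else ""
                if prev_ch not in "aeiou" and next_ch not in "aeiou":
                    count += 1
        return count

    def classify(word):
        n = vocalic_length(word)
        return "monovowel" if n == 1 else "divowel" if n == 2 else "polyvowel"

    for w in ("rhythm", "stay", "youth", "agony"):
        print(w, classify(w))   # rhythm monovowel, stay monovowel, youth divowel, agony polyvowel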

My first assumption is that polyvowels contribute to reading difficulty, with the exception of those found in Ogden’s list. My second assumption is that all monovowels and divowels are easy to read, whether they are present in Ogden’s list or not. As I suggested before, let us have a mix of freedom and control: freedom to use any monovowel or divowel; and control, to use only the words in the following list of Basic Polyvowels, consisting of just 212 words from Ogden’s list:

 

about account addition adjustment advertisement agreement again against amount amusement animal apparatus approval argument association attention attitude attraction authority automatic awake (21 words)

balance beautiful because before behaviour belief between boiling building business (10 words)

camera carriage cause certain cheese chemical colour committee community company comparison competition complete computer condition connection conscious country culture curtain cushion (21 words)

damage daughter decision degree delicate dependent desire destruction detail development different digestion direction discovery discussion disease distance distribution division (19 words)

education elastic electric engine enough environment equal every example exchange existence expansion experience (13 words)

family feather feeble feeling female fertile fiction foolish frequent future (10 words)

general government guide (3 words)

harbour harmony healthy hearing helicopter heredity history hospital house humour (10 words)

idea important impulse increase industry instrument insurance interest invention (9 words)

journey (1 word)

knowledge (1 word)

language learning leather library liquid loose (6 words)

machine manager married material measure medical meeting memory military minute motion mountain (12 words)

nation natural necessary needle noise (5 words)

observation office operation opinion opposite orange organisation ornament (8 words)

parallel peace physical picture please pleasure poison political position possible potato private probable produce property punishment purpose (17 words)

quality question quiet quite (4 words)

reaction reading ready reason receipt regular relation religion representative request responsible (11 words)

science secretary selection separate serious sneeze society special square statement station structure substance suggestion surprise (15 words)

teaching technology tendency theory together tomorrow tongue trousers trouble (9 words)

umbrella (1 word)

value violent voice (3 words)

waiting weather (2 words)

yesterday (1 word)

NOTE: The Basic Polyvowel Words may be used as a spelling scale too by administering a vocalic cloze test based on this list of just 212 words.

Seven Indices Of Readability

November 8, 2013

By Nirmaldasan

(nirmaldasan@hotmail.com)

In ‘The Average Sentence Length’, I suggested that a sentence should not be measured only in words but also in syllables and letters. And I gave this rule of thumb: “Over the whole document, make the average sentence length 15-20 words, 25-33 syllables and 75-100 characters.”

Look at this sentence from M.J. Moroney’s Facts From Figures: “Most people are little removed from average intelligence, but geniuses and morons tend to occur in splendid isolation.” Words (W) = 18; Syllables (S) = 34; Letters (L) = 99. Excepting a minor syllabic transgression, Moroney’s sentence seems to flatter my rule of thumb.

These variables W, S and L are good predictors of the readability of a text. Independently and in combination, these factors constitute seven indices of readability — three are mono-variable, three di-variable and one tri-variable. Each index shows the years of schooling (1 to 17+) required to understand a particular text. 

W-Index = W/2 = 18/2 = 9

S-Index = S/3 = 34/3 = 11.3

L-Index = L/10 = 99/10 = 9.9

WS-Index = (W/4) + (S/6) = (18/4) + (34/6) = 10.2

WL-Index = (W/4) + (L/20) = (18/4) + (99/20) = 9.5

SL-Index = (S/6) + (L/20) = (34/6) + (99/20) = 10.6

WSL-Index = (W/6) + (S/9) + (L/30) = (18/6) + (34/9) + (99/30) = 10.1
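
For those who would like to try all seven at once, here is a minimal sketch in Python (the language and the function name are assumptions of mine), using W, S and L from the Moroney sentence quoted above:

    def seven_indices(w, s, l):
        """Seven readability indices, each giving years of schooling (1 to 17+)."""
        return {
            "W-Index": w / 2,
            "S-Index": s / 3,
            "L-Index": l / 10,
            "WS-Index": w / 4 + s / 6,
            "WL-Index": w / 4 + l / 20,
            "SL-Index": s / 6 + l / 20,
            "WSL-Index": w / 6 + s / 9 + l / 30,
        }

    print(seven_indices(18, 34, 99))
    # matches the figures worked out above: 9, 11.3, 9.9, 10.2, 9.5, 10.6 and 10.1 after rounding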

Writers and teachers may choose any one of the seven indices and use it to measure the readability of any text. They may try out all seven on different texts and heuristically choose the index that proves most reliable.

The Words We Choose

May 23, 2013

By Nirmaldasan

(nirmaldasan@hotmail.com)

—This article appeared in the Jan-March 2013 issue of Vidura, a quarterly journal of the Press Institute of India. —

A writer who thinks and feels is a writer who knows words that engage the reader. John Ayto, in his introduction to the Bloomsbury Dictionary of Word Origins, tells us that the average English speaker knows about 50,000 words. If the print and the broadcast media function within this vocabulary range, readership and rating points are sure to increase. But unfamiliar words have the potential to turn off the audience.

Edward Thorndike found that there was a relationship between familiarity and frequency. He spent about a decade preparing The Teacher’s Word Book (1921) of 10,000 words. “The list,” he writes, “makes it much easier than it has been in the past to put standards for word knowledge, by grades, by ages, or by mental ages, into clear, definite comprehensible form. For example, we may say that at a certain mental age or grade the minimum standard should be knowledge of the meanings of 95 per cent of the first 2500 words, 80 per cent of the next 1000, 60 per cent of the next 1500, and 20 percent of the next 5000.” This list he expanded to 30,000 words in 1944, teaming up with Irving Lorge.

Alfred Lewerenz discovered an unusual pattern in the frequency of words. In ‘Proposals For British Readability Measures’, Harry McLaughlin writes about him: “I have always had a soft spot in my heart for the genius who predicted readability from the percentages of words beginning w, h or b (which he considered easy) and of words beginning i or e (considered hard).” George Johnson, in ‘An Objective Method Of  Determining Reading Difficulty’, writes: “Alfred S. Lewerenz reported a study made by the Educational Research Division of the Los Angeles Public Schools. By comparing the number of different words beginning with each letter of the alphabet in a given selection with that of the standard provided by Webster’s Elementary School Dictionary, five critical letters were selected as indicators of reading difficulty. Words beginning with W, H, and B were found frequently in easy material while there were comparatively few beginning with I and E. With difficult reading material the situation was reversed.”

Edgar Dale compiled a list of 3000 words, familiar to 80 percent of 4th graders in the U.S. This list was revised in 1983 and is a factor in the new Dale-Chall readability formula of 1995. Notable among other lists are the Oxford 3000 and Voice of America’s Special English Word Book. The Oxford 3000 also includes some important and familiar words that are not frequent.

Zipf’s law

George Kingsley Zipf was also interested in word frequencies. Two of his books are The Psycho-biology Of Language (1935) and Human Behaviour And The Principle Of Least Effort: An Introduction To Human Ecology (1949). He observed that words of high frequency were usually short or became shorter with frequent use (e.g. bicycle to bike; omnibus to bus; cafeteria to cafe). Moreover, what is called Zipf’s law states that the frequency of a word in a corpus is inversely proportional to its rank. The frequency of the top-ranked word is twice that of the second-ranked word, thrice that of the third-ranked word and so on.   

Since there is a strong correlation between frequency and the length of words, it has become easier for writers to identify words that are familiar to most of their readers. The length of a word may be measured in characters or syllables. The Raygor Estimate Graph of Alton L. Raygor (1977) considers words of six or more characters difficult; the SMOG Grading of Harry McLaughlin (1969) counts polysyllables as a marker of reading difficulty. My research, presented in Readability Monitor, suggests the following measures: the reading factor for print and the listening factor for broadcast.

Broadcast Listening Factor

Let P3 be the number of polysyllables in three sentences of a broadcast copy. The Broadcast Listening Factor (BLF) = P3. The lower the score, the higher the listenability. A score of zero means that the story is very easy and a score of 10+ means that it is very hard.

We will get a better estimate if we take 10 samples of three sentences each from various parts of the copy and calculate listenability. If we take just one long sample of 30 sentences, then the BLF = P30/10.
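
A minimal sketch of the BLF in Python (an assumption of mine; polysyllables are detected with a rough vowel-group heuristic rather than a dictionary):

    import re

    def polysyllable_count(text):
        """Rough count of words of three or more syllables (vowel-group heuristic)."""
        return sum(1 for word in text.split()
                   if len(re.findall(r"[aeiouy]+", word.lower())) >= 3)

    def broadcast_listening_factor(three_sentences):
        """BLF = number of polysyllables in three sentences; lower means more listenable."""
        return polysyllable_count(three_sentences)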

Newspaper Reading Factor

I have argued elsewhere that the average syllable has three letters; and so a polysyllable may have nine letters or more. So a long word is one that has more than eight letters.

The number of long words other than the names of persons and places in five sentences may be called the Newspaper Reading Factor. Names of persons and places are exempted from the count as they are usually supposed to be very easy to understand. This formula measures newspaper texts on a five-point scale: 0 – 4 (very easy); 5 – 8 (easy); 9 – 12 (standard); 13 – 16 (hard); and 17+ (very hard).
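
And a minimal sketch of the Newspaper Reading Factor in Python (an assumption of mine; names of persons and places must be supplied by the user, since spotting them automatically is beyond this sketch):

    def newspaper_reading_factor(five_sentences, exempt_names=frozenset()):
        """Number of long words (nine or more letters) in five sentences, names exempted."""
        words = [w.strip(".,;:()\"'") for w in five_sentences.split()]
        return sum(1 for w in words
                   if len(w) > 8 and w.lower() not in exempt_names)

    # Scale: 0-4 very easy, 5-8 easy, 9-12 standard, 13-16 hard, 17+ very hard.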