
Quality and Performance Test Plan

We outlined quality and performance test plans for our API. Performance testing was out of scope for this project; we conducted an informal, manual functional test.

Functional Test

  • Does the API endpoint run? (see the sketch after this list)
  • Is it JSON?
  • Is the JSON structure format accurate? (brackets)
  • Is the JSON structure not empty?
    • If the Mural is completely empty, does it return the correct message? (ex: “no sticky notes entered on this Mural”)
    • If a category on the Mural is completely empty (null values), does it return the correct message? (ex: “no sticky notes in this category”)
  • Min and max expected values of extracted keywords (min: 1, max: 3)
    • Does it display the correct message if no keywords match and no recommendation is made? (ex: “not enough information to generate a recommendation at this time”)
  • Check expected data types
    • String
    • String starting with URL
    • List of strings
  • Are the URLs working?
  • Are there duplicated IDs?
  • Are there duplicate items in a string list?
  • Are there null values in the sticky note list in the new JSON structure? (Checking that the Python code only copied sticky notes with non-null text from the CSV to the new JSON structure)
  • Does it return the correct message for invalid input? (ex: the user types in a category that does not exist, such as “fun”, and the API returns “This is not one of the categories” and lists the 8 categories for the user)
  • If the sticky notes don’t have enough text to make a recommendation, does it return the correct message for Mural input with no recommendation matched? (ex: “no recommended content for the existing sticky notes”)
  • Does it display an alarm message if no such Mural board file exists?
  • Is the JSON in English?
  • Check the functionality of the NLP tool (did the NLP library call happen?)
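
A minimal sketch of how a few of these checks could be automated in Python. The endpoint URL and the "keywords" field name are assumptions for illustration, not part of the documented API contract, and the response is assumed to be a list of recommendation objects:

```python
import json

import requests

BASE_URL = "http://localhost:5000/recommendations"  # hypothetical endpoint URL

response = requests.get(BASE_URL, timeout=10)

# Does the API endpoint run?
assert response.status_code == 200, f"Unexpected status: {response.status_code}"

# Is it JSON? json.loads() raises an error if the structure (brackets) is malformed.
payload = json.loads(response.text)

# Is the JSON structure not empty?
assert payload, "Response JSON is empty"

# Min and max expected values of extracted keywords (min: 1, max: 3),
# assuming each recommendation object carries a "keywords" list.
for item in payload:
    keywords = item.get("keywords", [])
    assert 1 <= len(keywords) <= 3, f"Unexpected keyword count: {keywords}"
```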

Performance Test

  • How long does it take to return a result? (goal: <6 sec; see the timing sketch after this list)
  • How does the API handle “spamming” of the button that initiates the recommendation API? (applies if the user must manually click a button to initiate the recommendation feature and an impatient user clicks it repeatedly)
  • Load test with constantly updating Mural file (if auto-generating recommendations when Mural is updated)
  • Load test with the number of users in the system (max possible users at one time).
  • Load test with different callers requesting the recommendation API for the same file
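
A rough sketch of how the latency goal and the same-file concurrency scenario could be measured, again assuming a hypothetical local endpoint URL:

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests

BASE_URL = "http://localhost:5000/recommendations"  # hypothetical endpoint URL


def timed_call(_):
    """Call the recommendation endpoint and return the elapsed time in seconds."""
    start = time.perf_counter()
    requests.get(BASE_URL, timeout=30)
    return time.perf_counter() - start


# Single-call latency check against the <6 second goal.
assert timed_call(None) < 6, "Single request exceeded the 6-second goal"

# Simple concurrency check: ten callers requesting recommendations for the same
# file at once (approximates an impatient user spamming the button).
with ThreadPoolExecutor(max_workers=10) as pool:
    durations = list(pool.map(timed_call, range(10)))

assert max(durations) < 6, f"Slowest concurrent request took {max(durations):.1f}s"
```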

Testing Results

Functional Testing (manual attempt)

  • Does the API endpoint run?: Pass
  • Is it JSON?: Pass
  • Is the JSON structure format accurate? (brackets): Pass
  • Is the JSON structure not empty?: Pass
    • If the Mural is completely empty, does it return the correct message? (ex: “no sticky notes entered on this Mural”): Fail
    • If a category on the Mural is completely empty (null values), does it return the correct message? (ex: “no sticky notes in this category”): Pass
  • Min and max expected values of extracted keywords (min: 1, max: 3): Fail
    • Does it display the correct message if no keywords match and no recommendation is made? (ex: “not enough information to generate a recommendation at this time”): Fail
  • Check expected data types: Fail
    • String
    • String starting with URL
    • List of strings
  • Are the URLs working?: Fail
  • Are there duplicated IDs?: Fail
  • Are there duplicate items in a string list?: Fail
  • Are there null values in the sticky note list in the new JSON structure? (Checking that the Python code only copied sticky notes with non-null text from the CSV to the new JSON structure): Pass
  • Does it return the correct message for invalid input? (ex: the user types in a category that does not exist, such as “fun”, and the API returns “This is not one of the categories” and lists the 8 categories for the user): Pass
  • If the sticky notes don’t have enough text to make a recommendation, does it return the correct message for Mural input with no recommendation matched? (ex: “no recommended content for the existing sticky notes”): Fail
  • Does it display an alarm message if no such Mural board file exists?: Fail
  • Is the JSON in English?: Pass
  • Check the functionality of the NLP tool (did the NLP library call happen?): Fail

Performance Testing

Not completed for this stage of development.

Remediation Plan

  • Mural limitation: If there are no sticky notes in an "area" on a Mural board, the CSV export will not include that area (we designated "area" as the topic in the new structure). Therefore, our testing for a typo or for a topic not in the list will be inaccurate if the mentee did not enter anything in a category. To remedy:
    • Pre-populate the Mural template with blank sticky notes to help mitigate this; if the mentee deletes the blank sticky notes, the issue remains
    • Set the Mural board template so that users cannot add a new "area" to prevent errors
  • The current dataset uses fake URLs for the recommended content; future testing will check whether the URLs are working.
  • When the code is paired with a live version of the NLP tool:
    • Check if NLP tool call executed
    • Test for expected min and max keyword returns
    • Check if it displays the correct message if not enough keywords to make a match
  • Testing features to add (see the sketch after this list):
    • Check for expected data types
    • Check for duplicates (IDs and strings)
    • Message for Mural boards with no sticky notes
    • Message if Mural board not found
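
A minimal pytest sketch of what these added checks might look like. The endpoint URL, the field names ("id", "url", "keywords"), the "board" query parameter, and the not-found message are assumptions for illustration only:

```python
import pytest
import requests

BASE_URL = "http://localhost:5000/recommendations"  # hypothetical endpoint URL


@pytest.fixture(scope="module")
def payload():
    """Fetch the recommendation JSON once for all tests in this module."""
    return requests.get(BASE_URL, timeout=10).json()


def test_expected_data_types(payload):
    # Assumes each item carries a string id, a URL string, and a list of strings.
    for item in payload:
        assert isinstance(item["id"], str)
        assert isinstance(item["url"], str) and item["url"].startswith("http")
        assert all(isinstance(k, str) for k in item["keywords"])


def test_no_duplicate_ids(payload):
    ids = [item["id"] for item in payload]
    assert len(ids) == len(set(ids)), "Duplicate IDs found"


def test_no_duplicate_keywords(payload):
    for item in payload:
        assert len(item["keywords"]) == len(set(item["keywords"])), "Duplicate keywords found"


def test_board_not_found_message():
    # Assumes a board identifier is passed as a query parameter; the exact
    # error message is a placeholder for whatever the API returns.
    response = requests.get(BASE_URL, params={"board": "does-not-exist"}, timeout=10)
    assert "no such mural board" in response.text.lower()
```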