MDL-83006: AI: Provider Plugin - Ollama

    • MOODLE_500_STABLE
    • MDL-83006-main

      Initial setup

      1. Log in as admin.
      2. Restore the attached course backup backup-moodle2-course-2-c1-20240826-0815-nu.mbz.
      3. Clear the curlsecurityblockedhosts and curlsecurityallowedport settings (a CLI sketch for this follows the list).
      4. Set up a local Ollama instance: https://github.com/ollama/ollama. For instance, if you want to run your local Ollama instance using Docker:

        docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

      5. Pull the llama3.2 and phi4 models (the check after this list verifies both are available). If you run your local Ollama instance with Docker:

        docker exec -it ollama ollama run llama3.2
        docker exec -it ollama ollama run phi4
        

      6. Log in as admin.
      7. Go to AI > AI placements and enable Text editor placement.
      8. Go to AI > AI providers and disable the other providers (OpenAI, Azure, etc.).
      9. Enable the Ollama provider.
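
      A minimal shell sketch for steps 3 and 5, assuming a Moodle checkout at /var/www/moodle (the path is an assumption; admin/cli/cfg.php is core Moodle's CLI tool for changing config values):

        # Step 3: clear the cURL security settings so Moodle can reach localhost:11434.
        php /var/www/moodle/admin/cli/cfg.php --name=curlsecurityblockedhosts --set=
        php /var/www/moodle/admin/cli/cfg.php --name=curlsecurityallowedport --set=

        # Step 5 check: confirm the Ollama API is up and both models were pulled.
        curl http://localhost:11434/api/tags
        # Expected: a JSON "models" array listing llama3.2 and phi4.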

      Testing scenario 1: Text generation

      1. Go to AI > AI providers and click Settings for Ollama.
      2. Click on the Settings link for the Generate text action.
      3. Use the Custom model.
      4. Enter llama3.2 into the Custom model name field.
      5. Save changes.
      6. Edit your profile.
      7. Click on the AI button in the TinyMCE text editor.
      8. Select AI Generate Text.
      9. Agree to the policy (if shown).
      10. Enter a prompt in the "Describe the text you want AI to create" field, for example: Write a short introduction for Moodle LMS.
      11. Press Generate Text.
      12. Verify that the generated text appears on the right. (If nothing appears, the API sanity check after this list can help isolate the problem.)
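
      If step 12 fails, it can help to confirm that the model answers outside Moodle first. This calls the public Ollama REST API directly; it is not necessarily the exact request the plugin sends:

        curl http://localhost:11434/api/generate -d '{
          "model": "llama3.2",
          "prompt": "Write a short introduction for Moodle LMS.",
          "stream": false
        }'
        # Expected: a JSON object whose "response" field holds the generated text.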

      Testing scenario 2: Summarise action

      1. Go to AI > AI providers and click Settings for Ollama.
      2. Click on the Settings link for the Summarise text action.
      3. Use the Custom model.
      4. Enter llama3.2 into the Custom model name field.
      5. Save changes.
      6. Navigate to the test course.
      7. Navigate to Grandma's Kimchi activity.
      8. Click on the AI features dropdown list and choose Summarise.
      9. Agree to the policy (if shown).
      10. Verify that the summarised content appears on the right.

      Testing scenario 3: Explain action

      1. Go to AI > AI providers and click Settings for Ollama.
      2. Click on the Settings link for the Explain text action.
      3. Use the Custom model.
      4. Enter llama3.2 into the Custom model name field.
      5. Save changes.
      6. Navigate to the test course.
      7. Navigate to Grandma's Kimchi activity.
      8. Click on the AI features dropdown list and choose Explain.
      9. Agree to the policy (if shown).
      10. Verify that the explained content appears on the right.

      Repeat all the tests (1, 2 and 3) with the phi4 model instead of llama3.2.

    • Fails against automated checks. Checked MDL-83006 using repository https://github.com/HuongNV13/moodle.git (3 errors / 0 warnings) [branch: MDL-83006-main | CI Job]: overview (0/0), phplint (0/0), phpcs (3/0), js (0/0), css (0/0), phpdoc (0/0), commit (0/0), savepoint (0/0), thirdparty (0/0), externalbackup (0/0), grunt (0/0), shifter (0/0), mustache (0/0), gherkin (0/0). Should these errors be fixed? Built on: Mon Mar 24 15:50:00 UTC 2025

      Launching automatic jobs for branch MDL-83006-main-main

        • PHPUnit (sqlsrv / --testsuite core_ai_testsuite,aiprovider_openai_core_ai_testsuite): https://ci.moodle.org/view/Testing/job/DEV.02%20-%20Developer-requested%20PHPUnit/18085/
        • Behat (NonJS - boost and classic / --tags @core_ai): https://ci.moodle.org/view/Testing/job/DEV.01%20-%20Developer-requested%20Behat/61797/
        • Behat (Firefox - boost / --tags @core_ai): https://ci.moodle.org/view/Testing/job/DEV.01%20-%20Developer-requested%20Behat/61798/
        • Behat (Firefox - classic / --tags @core_ai): https://ci.moodle.org/view/Testing/job/DEV.01%20-%20Developer-requested%20Behat/61799/
        • App tests (stable app version / --tags @core_ai): https://ci.moodle.org/view/Testing/job/DEV.01%20-%20Developer-requested%20Behat/61800/

      Built on: Tue Jan 28 06:30:39 AM UTC 2025
    • Team Hedgehog 2024 Sprint 4.3, Team Hedgehog 2025 Sprint 1.0, Team Hedgehogs 2025 Sprint 1.1, Team Hedgehogs 2025 Sprint 1.2, Team Hedgehogs 2025 Sprint 1.3

      Create an AI provider plugin that interfaces with Ollama: https://ollama.com/

      Ollama is an open-source project that serves as a powerful and user-friendly platform for running LLMs on your local machine. It acts as a bridge between the complexities of LLM technology and the desire for an accessible and customizable AI experience.

      This plugin will interface with the upstream AI service and make AI actions available to the subsystem (a sketch of such an upstream call follows the list below).

      Initial Supported Actions:

      • Generate: Generate text content based on user prompt text
      • Summarise: Summarise the provided text. Condense long text into key points. Simplify anything too complex for learners to understand.
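
      As an illustration of the kind of upstream call these actions involve (a sketch against the public Ollama REST API, not necessarily the plugin's exact request), a summarise-style prompt could be sent like this:

        curl http://localhost:11434/api/chat -d '{
          "model": "llama3.2",
          "messages": [
            {"role": "system", "content": "Summarise the provided text as key points."},
            {"role": "user", "content": "Kimchi is a traditional Korean dish of salted and fermented vegetables."}
          ],
          "stream": false
        }'
        # Expected: a JSON object whose message.content field holds the summary.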


      Basic Authentication

      By default, Ollama does not provide any authentication to secure access to the instance. The recommended approach from the Ollama documentation and GitHub is to put a reverse proxy such as Caddy or Nginx in front of it and use Basic Authentication.
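
      For example, a minimal Caddyfile along these lines (Caddy v2.8+ syntax) puts Basic Auth and automatic HTTPS in front of a local Ollama; the hostname and username are placeholders, and <bcrypt-hash> stands for the output of `caddy hash-password`:

        ollama.example.com {
            # Require HTTP Basic Auth for every request.
            basic_auth {
                moodle <bcrypt-hash>
            }
            # Forward authenticated requests to the local Ollama instance.
            reverse_proxy localhost:11434
        }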

      Ollama Test Environments

      Local

      Ollama can be run locally for testing models, or on the same server as Moodle. This approach is supported by the plugin.

      AWS

      For testing larger models, or for anyone who doesn't have enough local resources to run LLMs via Ollama, a test instance can be set up in AWS.

      The repository and instructions for running this can be found at: https://github.com/mattporritt/aws_ollama

      This will set up an EC2 instance in AWS running Ollama that can be accessed publicly. The test instance also serves traffic over HTTPS for more secure data transfer and implements Basic Auth.
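
      From a shell, both HTTPS and Basic Auth on such an instance can be exercised with curl (hostname and credentials are placeholders):

        curl -u moodle:yourpassword https://ollama.example.com/api/tags
        # A 200 response with the model list confirms HTTPS and Basic Auth work;
        # a 401 means the proxy rejected the credentials.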

      The EC2 instance size, storage, etc. can be customised.

      NOTE: Running an instance in AWS will cost money.

      Attachments:

        1. backup-moodle2-course-2-c1-20240826-0815-nu.mbz
          11 kB
          Huong Nguyen
        2. (1) 12 Passed -- (Main)MDL-83006.png
          77 kB
          Kim Jared Lucas
        3. (2) 10 Passed -- (Main)MDL-83006.png
          244 kB
          Kim Jared Lucas
        4. (3) 10 Passed -- (Main)MDL-83006.png
          303 kB
          Kim Jared Lucas
        5. MDL-83006_phi4_1_step12.png
          85 kB
          Sara Arjona (@sarjona)
        6. MDL-83006_phi4_setup.png
          43 kB
          Sara Arjona (@sarjona)
        7. MDL-83006_phi4_2_step10.png
          237 kB
          Sara Arjona (@sarjona)
        8. MDL-83006_phi4_3_step10.png
          260 kB
          Sara Arjona (@sarjona)

            Huong Nguyen (huongn@moodle.com)
            Matt Porritt (matt.porritt@moodle.com)
            Meirza
            Sara Arjona (@sarjona)
            Kim Jared Lucas
            Votes: 28
            Watchers: 27

                Estimated: 0m
                Remaining: 0m
                Logged: 1w 3d 3h 56m
