Improvement
Resolution: Fixed
Blocker
Future Dev
MOODLE_500_STABLE
MDL-83006-main
6
Team Hedgehog 2024 Sprint 4.3, Team Hedgehog 2025 Sprint 1.0, Team Hedgehogs 2025 Sprint 1.1, Team Hedgehogs 2025 Sprint 1.2, Team Hedgehogs 2025 Sprint 1.3
Create an AI provider plugin that interfaces with Ollama: https://ollama.com/
Ollama is an open-source project that serves as a powerful and user-friendly platform for running LLMs on your local machine. It acts as a bridge between the complexities of LLM technology and the desire for an accessible and customizable AI experience.
This plugin will interface with the upstream Ollama service and make AI actions available to the AI subsystem.
Initial Supported Actions:
- Generate: Generate text content based on user prompt text
- Summarise: Summarise the provided text. Condense long text into key points. Simplify anything too complex for learners to understand.
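The plugin's own code is not shown here, but as a rough sketch of what a Generate action amounts to against Ollama's REST API (see the API docs linked below): a POST to `/api/generate` with a model name and prompt, where `stream: false` requests a single JSON response. The model name and base URL below are illustrative only.

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default port


def build_generate_payload(model: str, prompt: str) -> bytes:
    # Ollama's /api/generate takes a JSON body; stream=False asks for
    # one JSON object instead of a stream of partial chunks.
    return json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode()


def generate(model: str, prompt: str) -> str:
    # Send the request and return the generated text from the
    # "response" field of the JSON reply.
    req = request.Request(
        OLLAMA_URL + "/api/generate",
        data=build_generate_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

A Summarise action would use the same endpoint with a summarisation instruction prepended to the prompt.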
Useful related links:
- https://github.com/ollama/ollama?tab=readme-ov-file
- https://github.com/ollama/ollama/blob/main/docs/api.md
- https://chariotsolutions.com/blog/post/apple-silicon-gpus-docker-and-ollama-pick-two/
Basic Authentication
By default Ollama does not support any authentication to secure access to the Ollama instance. The "recommended" way from the Ollama documentation and GitHub is to use a reverse proxy like Caddy or Nginx and use Basic Authentication.
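As a sketch of that reverse-proxy setup with Caddy v2 (the hostname and bcrypt hash are placeholders; the directive is spelled `basicauth` in Caddy releases before 2.8, `basic_auth` from 2.8 on):

```
# Hypothetical Caddyfile: terminate TLS and require Basic Auth
# before forwarding to the local Ollama instance.
ollama.example.com {
    basic_auth {
        # hash generated with `caddy hash-password`
        moodle $2a$14$PLACEHOLDER_HASH
    }
    reverse_proxy localhost:11434
}
```

Caddy obtains and renews the TLS certificate automatically for the named host, so traffic to the proxy is encrypted as well as authenticated.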
Ollama Test Environments
Local
Ollama can be run locally for testing models, or on the same server as Moodle. This approach is supported by the plugin.
AWS
For testing larger models, or for anyone who doesn't have enough resources locally to run LLMs via Ollama, a test instance can be set up in AWS.
The repository and instructions on running this can be found at: https://github.com/mattporritt/aws_ollama
This will set up an EC2 instance in AWS running Ollama that can be accessed publicly. This test instance also sends traffic via HTTPS for more secure data transfer and implements Basic Auth.
The EC2 instance size, storage, etc. can be customised.
NOTE: Running an instance in AWS will cost money.
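Since the test instance is protected by Basic Auth over HTTPS, any client (the plugin included) just needs to send the standard `Authorization` header, which is the base64 encoding of `user:password` per RFC 7617. A minimal sketch (the helper name is ours, not the plugin's):

```python
import base64


def basic_auth_header(username: str, password: str) -> dict:
    # HTTP Basic Auth: base64-encode "user:password" and prefix
    # the result with "Basic " (RFC 7617).
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}
```

This header is only safe to send over HTTPS, since base64 is an encoding, not encryption.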
caused a regression:
- MDL-85027 AI: Revert AI lang string changes and fix Ollama's endpoint (Closed)
has a non-specific relationship to:
- MDLSITE-8082 Update Moodle docs for AI providers added in 5.0 (Resolved)
is blocked by:
- MDL-84337 Impossible to toggle the state of AI placement actions (Closed)
- MDL-84336 AI placement errors when enabled provider doesn't implement all actions (Closed)
- MDL-82609 AI: Provider - Action settings (Closed)
- MDL-82980 AI: Per Model settings (Closed)
links to