Welcome to this comprehensive four-part series on developing and publishing your own Nextflow plugins!
 
Nextflow plugins allow you to extend the core functionality of Nextflow, making your pipelines more powerful,
flexible, and integrated with external systems. Whether you’re looking to add custom executors, integrate cloud services,
or enhance reporting, this series has you covered from initial concept to final release.
 
 
Moving beyond the basic setup, this post dives into how to define, manage, and access custom configuration within your plugin, making it truly adaptable for various user environments.
 
To build something more interesting, we'll create a new nf-llm plugin following the steps explained in the previous post.
 
This new plugin will allow the user to store messages in an embedded store and "chat" with an LLM at the end of the
pipeline about the messages collected during the execution.
 
For example, a simple pipeline could look like this:
 
include { addMessage; chat } from 'plugin/nf-llm'

workflow {
    channel.of( 'hi',
                'this is a simple sentence generated at ' + new Date(),
                "don't be shy and give me a hi" )
            .subscribe(
                // store every value emitted by the channel
                onNext: { v ->
                    addMessage v
                },
                // once the channel completes, ask the LLM about the collected messages
                onComplete: {
                    println chat('Generate a brief summary of the conversation')
                } )
}
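Note that addMessage runs for every value emitted by the channel, while chat is only invoked in the onComplete handler, once the whole conversation has been collected.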
 
 
Now imagine that, instead of a simple channel of strings, you call addMessage from every executed process: once the
pipeline completes, you can ask the LLM to generate a report, analyze execution times, and more (see the sketch below).
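For example, something along these lines, where SAY_HELLO is a made-up process used only to illustrate the pattern:

include { addMessage; chat } from 'plugin/nf-llm'

// Hypothetical process, just to show addMessage reacting to process outputs
process SAY_HELLO {
    input:
    val name

    output:
    stdout

    script:
    "echo Hello, $name!"
}

workflow {
    SAY_HELLO( channel.of('alpha', 'beta') )
        .subscribe(
            onNext: { out -> addMessage "process output: $out" },
            onComplete: { println chat('Generate a short report of the outputs above') }
        )
}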
 
For the sake of simplicity, our plugin will use Google Gemini (though it could easily be swapped for OpenAI, Ollama, etc.),
so the user needs to provide their apiKey in nextflow.config. The user will also be allowed to specify
which model to use in their pipeline:
 
plugins {
    id "nf-llm@0.1.0"
}

llm {
    model = "gemini-2.5-flash"
    apiKey = "AIzaSy------"
}
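
On the plugin side, a minimal way to expose this scope is a small config class plus the extension point's init hook. The following is just a sketch (the LlmConfig class name and the default model are our choices; session.config.navigate is the helper Nextflow provides to read nested config values):

import nextflow.Session
import nextflow.plugin.extension.PluginExtensionPoint

// Plain holder for the 'llm' scope, with a fallback model when none is set
class LlmConfig {
    final String model
    final String apiKey

    LlmConfig(Map map) {
        def config = map ?: [:]
        model  = config.model ?: 'gemini-2.5-flash'
        apiKey = config.apiKey   // a secret has no sensible default
    }
}

class LlmExtension extends PluginExtensionPoint {

    private LlmConfig config

    @Override
    protected void init(Session session) {
        // read the custom 'llm' scope declared in nextflow.config
        config = new LlmConfig( session.config.navigate('llm') as Map )
    }

    // the addMessage and chat functions will use this.config
}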