Talk to your Fedora system with the Linux MCP Server!

Generative AI systems are changing the way people interact with computers. The model context protocol (MCP) enables LLMs to run commands and use tools, opening the door to live, conversational interaction with systems. Using the new linux-mcp-server, let’s walk through how you can talk with your Fedora system to understand it and get help troubleshooting it!

Introduction

Large language models (LLMs) can be an invaluable tool when investigating an issue on a Linux system. However, this often involves a lot of copying and pasting of information from the Linux terminal into a web-based interface to an LLM.

The model context protocol (MCP) acts as a bridge, enabling LLMs to interact with external tools and data sources.  The linux-mcp-server utilizes this protocol to give LLMs the ability to interact with a Fedora Linux system.  Instead of you manually copying and pasting terminal output, the linux-mcp-server enables the LLM to directly query system information and log entries.  

By giving an LLM direct access to system information and logs, you transform it into an active participant in the troubleshooting process. It can query the system state directly, helping to pinpoint performance bottlenecks and surface important log entries that a manual review might miss.

What is the model context protocol (MCP)?

Anthropic introduced MCP in November 2024 as an open standard for LLM tool use. It provides a way for LLMs to interact with outside systems and data sources.

Prior to MCP, there wasn’t a strong standard or ecosystem for LLM systems to call tools. LLMs were thus frequently limited to the information contained in their training data; they were isolated from the outside world. For example, if you asked an LLM “what is the weather going to be next week”, it would respond that it doesn’t know, as it doesn’t have access to that information. MCP helps solve this problem by providing a standardized way for an LLM to access an outside data source, such as a weather forecast.
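
To make this concrete, here is a minimal sketch of an MCP server exposing a single weather tool, written with the FastMCP helper from the MCP Python SDK. The canned forecast string is our stand-in; a real server would query a forecast API:

    # weather_mcp.py – minimal MCP server sketch using the MCP Python SDK's FastMCP helper
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("weather")

    @mcp.tool()
    def get_forecast(city: str) -> str:
        """Return next week's forecast for the given city."""
        # A real implementation would call a weather API here.
        return f"{city}: sunny early in the week, rain likely by Friday."

    if __name__ == "__main__":
        mcp.run()  # serve the tool over stdio so an MCP client can call it

Once an AI agent application is pointed at a server like this, the LLM can answer the weather question by calling the get_forecast tool instead of guessing.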

At a high level, you run an AI agent application, such as Goose (open source) or Claude Desktop, and specify which MCP servers you would like to use. The AI agent application informs the LLM that tools are available via these MCP servers to help answer the user’s requests. The LLM can then decide when to invoke these tools.

MCP is an open standard. You have the flexibility to use MCP servers, such as linux-mcp-server, with either open source-licensed LLM models or hosted proprietary LLM models.

What is the linux-mcp-server?

The linux-mcp-server is a project started by Red Hat’s RHEL Engineering team. It provides a number of tools that enable an LLM to query information from a Linux system: system information, service status, process information, journald and other logs, network configuration, and storage and disk details. For a full list of the tools provided, refer to the project’s GitHub page.

These tools, provided by linux-mcp-server, are focused on providing the LLM access to read-only information.  In the future, we’ll be exploring expanding past these read-only use cases.  

The linux-mcp-server can be used to interact with the local Fedora Linux system that it is running on. It can also be used to interact with remote Fedora Linux systems over SSH. For example, if you have SSH key authentication set up with the remote systems, you could make a request to your AI agent application such as “Determine the current memory usage on the fedora1.example.com, fedora2.example.com, and fedora3.example.com servers”.

Prerequisites

The main components needed are an AI agent application, access to LLM model inference, and the linux-mcp-server. 

There are a number of options for the AI agent application, both open source and proprietary.  An example of an open source AI agent is Goose, which provides an RPM that can be installed on Fedora.

There are several LLM model inference options, ranging from locally hosted open source models to proprietary hosted LLM models. If hosting a model locally, you generally need GPU/accelerator hardware available. Open source tools that can be used to locally host LLM models include RamaLama, llama.cpp, and Ollama. There are a number of open source LLM models capable of tool calling, such as gpt-oss-20b and Granite 4 Small.
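
For example, if you use Ollama to host a model locally, fetching and serving a tool-calling-capable model can be as simple as the following (the gpt-oss:20b tag is an assumption here; check your model host’s registry for the exact model names available):

    # Fetch and serve a tool-calling-capable model locally with Ollama
    ollama pull gpt-oss:20b
    ollama run gpt-oss:20b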

Once you’ve installed your preferred AI agent application, the next step is to install the linux-mcp-server and configure your AI agent application to use it. The linux-mcp-server GitHub page includes installation instructions and configuration details for Goose and Claude Desktop.
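
As a rough idea of what that flow looks like (the package name and steps below are placeholders, not the project’s documented commands; follow the README for the real ones):

    # Hypothetical sketch only – consult the linux-mcp-server README
    # for the actual installation and configuration instructions.
    pip install --user linux-mcp-server   # placeholder package name
    goose configure                       # then add linux-mcp-server as an extension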

Use cases for linux-mcp-server

⚠️ NOTE

The example scenarios below include LLM generated output. The output is presented here as it was generated by the AI systems cited so you can assess the quality of the output they are capable of, particularly in a non-cloud, self-hosted scenario.

The sample LLM output provided here is not intended to be run by readers without the same level of review we recommend anyone perform on AI-generated content prior to use on their systems.

Dear Fedora, why are you so slow today?

Has this happened to you? Time to hop on a call and give a demo. You’re pumped. It’s your go-time aaaaand… your system starts lagging. Folks on the call say your video is choppy or they can’t understand you. OH NO.

You’ve got to assess what’s up, quickly. Run `top`, check `dmesg`, check your network connection (read the man page for `netstat` because you can’t remember the flag you want), check `journalctl`, cross-reference what you’re seeing, then start killing processes, restarting services, reloading kernel modules willy-nilly based on hastily-crafted theories, see if it improves… 

This is a case where you could actually ask the system to tell you why it’s slow and give it the ability to do the analysis via the linux-mcp-server. The LLM you connect won’t just run the commands through the MCP server; it will understand what the numbers it gets mean in context and be able to explain them in human terms.

So let’s start working with a relatively new laptop. It’s a ThinkPad T14s with an AMD Ryzen™ AI Pro 7 (Ryzen™ AI 360) chip. Recently, one of the Wi-Fi networks it must connect to regularly was upgraded, and it now has network connection stability issues. So let’s talk to Goose + gpt-oss:20b + linux-mcp-server about it. Here’s the initial prompt I used:

My Wifi doesn't work very well. It cuts out after a few minutes. Help me fix it. Can you find the error messages in the system logs and diagnose the problem?

It responded with the following:


🤖 LLM output 🤖

I’ll help you diagnose your WiFi connectivity issues by examining the system logs for error messages related to your network interfaces and wireless connectivity. Let me start by gathering information about your system and checking the relevant logs.

It then made a series of tool calls, including Get System Information, Get Network Interfaces, Get Journal Logs, Get Hardware Information, Get Service Status service_name:NetworkManager, Get Service Status service_name:wpa_supplicant, and Read Log File. The result was this analysis, which identified the actual problem I was hoping it could find!


🤖 LLM output 🤖

Problem Summary – AT‑H12K PCI Wi‑Fi Issue

⚡ Problem Summary

Your ATH12K PCI (Qualcomm 802.11ax) wireless card repeatedly reports


ath12k_pci 0000:c2:00.0: failed to enqueue rx buf: -28

every few minutes, which triggers a drop of the wireless link. The driver is trying to enqueue an RX buffer for the hardware but fails with errno ‑28.

errno ‑28 maps to -ENOMEM (out‑of‑memory for the driver buffer); the kernel cannot allocate a DMA‑coherent RX descriptor.

✅ Likely Root Causes

| Cause | Evidence / Typical Scenario | Fix / Work‑around |
| Driver bug or missing patch | The kernel’s ath12k driver has had a long‑running issue with RX ring management in older 5.15/5.4 series. | Update the kernel or apply the latest patch from the linux tree (5.18+ or the ath12k backports). |
| Out‑of‑memory in the driver | The error itself is “‑28” – a memory‑allocation failure inside the driver. | Reduce the number of RX descriptors, or enable fast‑RX disabled mode (modprobe ath12k_pci ath12k_pci_disable_fast_rx=1). |
| Firmware mismatch / corruption | Old firmware (ath12k_hwgen files) don’t match the driver version. | Reinstall firmware from your distro (sudo apt‑install --reinstall linux‑firmware) or copy the latest from kernel source. |
| System memory pressure | If many big processes run, DMA‑coherent memory pool can be exhausted. | Free RAM, close memory‑heavy apps, or add swap. |
| ACPI/PCIe power‑management quirk | Some laptops throttle the Wi‑Fi device after idle, resetting the buffer queue. | Add pci_bus_reset = off or set ath12k_pci_disable_ps=1 in a modprobe conf. |
| Hardware issue (e.g., faulty PCIe slot or NIC) | Persistent failure after multiple driver/kernel upgrades. | Test on another machine/PCIe slot or replace the card. |

🔎 Diagnostic Checklist

  1. Kernel & Driver Version
    
    
    uname -r
    modinfo ath12k_pci | grep ^version:
  2. Driver Parameters

     cat /sys/module/ath12k_pci/parameters/*

     – See if ath12k_pci_disable_fast_rx or similar is already set.

  3. Firmware Location

     ls /lib/firmware/ath12k/ath12k/ar9009/ | grep wlan0 | head

     – If the files are missing or older than the driver, copy newer ones from /lib/firmware of a newer kernel.

  4. Kernel Log Around the Error
    
    
    sudo dmesg --ctime | grep -A3 'ath12k_pci 0000:c2:00.0'

    – Observe if there are any other “-ENOMEM” or “resource exhaustion” messages.

  5. Memory Usage Stats
    
    
    free -h
    vmstat 1 5
  6. PCIe Latency / Link State
    
    
    sudo lspci -vvv -s c2:00.0 | grep -iE "link|latency"

🚀 Quick‑Fix Path (Try in this order)

| Step | Command | What it does |
| 1 | sudo apt update && sudo apt upgrade linux-firmware | Refresh firmware to the latest distro version. |
| 2 | sudo apt upgrade linux-image-$(uname -r) | Apply latest kernel with patched ath12k driver (or switch to 5.18+ if still on 5.15). |
| 3 | `echo "ath12k_pci ath12k_pci_disable_fast_rx=1" | sudo tee /etc/modprobe.d/ath12k.conf` | Force the driver to use a more conservative RX scheme. |
| 4 | Reboot and monitor: `dmesg -T | grep ath12k_pci` | Verify the error disappears. |
| 5 | Optional: test on another machine or use a USB‑WiFi dongle (USB 802.11ax / 802.11ac) as a temporary workaround. | Ensure the issue is not hardware‑specific. |

**Tip:** After reboot, you can also verify driver parameters:

cat /sys/module/ath12k_pci/parameters/ath12k_pci_disable_fast_rx

📺 Advanced Debugging (if basic fixes don’t help)

  1. Enable debug logs
    
    
    sudo bash -c 'echo 1 > /sys/module/ath12k_pci/parameters/debug'
    sudo dmesg -T | grep ath12k
  2. Check PCIe device state
    
    
    sudo lspci -vv -s c2:00.0
  3. Use ethtool on the Wi‑Fi interface

     sudo ethtool -S wlp4s0
  4. Look for firmware logs
    
    
    grep -i 'ath12k' /var/log/syslog

📌 Bottom Line

The “failed to enqueue rx buf: –28” is an internal driver allocation failure that typically follows one of two paths:

  1. The driver or firmware is out of date → upgrade kernel/firmware.
  2. The driver configuration is too aggressive for your system’s memory budget → tweak module parameters or free RAM.

If you're still stuck after following the quick‑fix path, it's worth capturing more detailed logs with the above diagnostic commands and sharing them. We can then cross‑reference the exact kernel version/driver commit to see if a known patch exists.

Feel free to let me know which step you’ve tried or if you encounter any new log messages, and we’ll dig deeper!


Not too bad an analysis and set of debugging steps! I’ll follow up in the comments and let you know if the solution works.

Where’d my disk space go, Fedora?

Trawling through `du` on the command line, converting the numbers in your head, sorting them – it can be tedious. We even have graphical tools like `baobab`, but it takes time to load the results and explore the data. And sometimes you might have questions it can’t answer, like the risk of deleting a specific directory or the age of a specific file. It’s easier to be sloppy and just let the disk fill up. But there comes a time when that is no longer an option.


You can ask the system using the linux-mcp-server to analyze your disk usage, give you a hitlist of the top space-eating places on the file system, and give you advice with a risk analysis of how to clean it up.

Here’s my prompt:

I need help analyzing disk usage on my system. Can you:

1. Show me the current disk usage across all filesystems using get_disk_usage
2. Use list_directories_by_size to show me the 10 largest directories (this will help identify where disk space is being consumed)
3. Check if there are any large log files consuming space using get_journal_logs or read_log_file
4. Review running processes with list_processes to see if any are consuming excessive resources or creating temporary files

After gathering this information, help me identify potential space-saving opportunities and assess the safety of cleaning up specific directories. Please don't only examine top-level directories. I want you to go deep if needed to find out where the largest files are!

I ran this prompt on my Fedora 42 system using linux-mcp-server with Goose connected to locally-served gpt-oss:20b, and here’s the output I got:


🤖 LLM output 🤖

Disk‑space overview (as of Nov 24 2025 15:21)

| Filesystem | Size | Used | Avail | Use % | Mounted on |
| /dev/mapper/luks‑61cedb0e… | 952.3 GB | 548.9 GB | 399.6 GB | 57.9 % | / |
| /dev/mapper/luks‑61cedb0e… | 952.3 GB | 548.9 GB | 399.6 GB | 57.9 % | /home |
| /dev/nvme0n1p2 | 973.4 MB | 462.8 MB | 443.4 MB | 51.1 % | /boot |
| /dev/nvme0n1p1 | 598.8 MB | 19.3 MB | 579.6 MB | 3.2 % | /boot/efi |

Overall usage is comfortably below 60 % of a 952 GB volume, so immediate capacity pressure isn’t a concern. The biggest contributors to disk space today are in the ~/. hierarchy.


1. Biggest directories on the system

| Path (relative to the mount point) | Size (≈) | Notes |
| ~/ – User home (2 mounts: /home + /) | 1 TB total | 57 % used; the bulk of the space is in ~/ |

| ~/duffy/.local/share | 238 GB | Containers sub-dir, followed by Trash (7GB) – typical podman/docker container images |
| ~/duffy/.local/share/containers | 238 GB | Container layers, images, root filesystems – the prime candidate for cleanup |
| ~/duffy/.local | 266 GB | share (237 GB) dominates it |
| ~/duffy/.local/share/containers (inside share) | 238 GB | Same as above – just a different view |
| ~/duffy/.local/lib | 13 GB | Likely a virtual‑environment library set |
| ~/duffy/.local/bin | 4 GB | Helper binaries |
| ~/duffy/.local/share/Trash | 7 GB | User‑deleted items still resident on disk |
| ~/duffy/.local/share/containers (nested) | 238 GB | Same as above; redundant listings from recursive depth‑1 scans |
| ~/duffy/.local/share (top‑level) | 238 GB | containers is the absolute largest item |

Take‑away – 75 % of the 238 GB live inside three nested directories:

  • ~/duffy/.local/share/containers (238 GB) → Rooted in Podman/Container‑Store
  • ~/duffy/.local/share (238 GB) → Same data exposed at a higher level
  • ~/duffy/.local/share/Trash (7 GB)

2. Possible space savers

| Category | Recommendation | Rationale |
| Containers | podman system prune -a (or docker system prune -a if Docker is in use) | Removes unreferenced images, containers, and caches. This can reclaim 200–250 GB if most containers are outdated or no longer referenced. |
| Trash | gio trash --empty | The 7 GB in Trash is still taking physical space; simply emptying it will free that amount. |
| Large libs/binaries | Check ~/duffy/.local/lib – 13 GB, and ~/duffy/.local/bin – 4 GB | If these are not actively used by a project, consider uninstalling or moving them to a secondary, less‑frequented store. |
| Ramalama & UV | 9 GB (ramalama), 8 GB (uv) | Possibly temporary files or caches from development tools. Inspecting the contents of these directories (du -h) can reveal if they are still needed. |
| Other caches | pnpm store prune, virtualenv or pip cache purge | Development tool caches often accumulate; they’re usually safe to delete once you’re done with the current development sessions. |
| Logs | The system journal shows normal activity, and syslog access is blocked by policy. | Systemd journal stores are rotated automatically; no urgent action required. |
| Boot partitions | Not relevant – they’re low‑usage (boot/efi only 3 % used). | No action needed. |

3. Disk‑IO perspective

  • Reads: 111.8 GB; Writes: 85.6 GB since boot (≈ 1.2 TB total).
  • If writes are high relative to free space, consider enabling a tmpfs overlay for temporary files or moving large data to a separate, faster SSD. But in this case, writes are not approaching the 399 GB free.

4. Suggested cleanup workflow

# 1. Empty Trash

gio trash --empty

# 2. Prune unused containers/images

podman system prune -a   # or docker system prune -a

# 3. Remove leftover caches

pnpm store prune         # if pnpm is installed
pip cache purge          # for Python
virtualenv --clean       # optional

# 4. Inspect large “RAM” directories manually if you need deeper cleanup

du -h --max-depth=1 ~/duffy/.local/share/containers

Tip: After each step run df -h /home or du -sh ~/duffy/.local/* to see the impact.


5. Safety checklist

| Action | Why it’s safe (or risk) |
| Emptying Trash | Deletes only user‑deleted files; confirm you don’t need the content. |
| podman system prune -a | Deletes container images, containers, and volumes that are not referenced by any running or stopped container. Verify with podman ps -a before pruning. |
| Removing lib/bin from .local | Only safe if you’re sure those libraries/bin files aren’t part of a running application. |
| Caution with tmpfs or moving large directories | If you move a directory on‑the‑fly, ensure no running application depends on that path. |

Bottom line

  • Current free space: ≈ 400 GB on both / and /home.
  • Largest consumptions: User‑space under ~/duffy/.local/share/containers (≈ 238 GB).
  • Actionable clean‑ups: Empty Trash, prune unused container images, and review large libraries/binaries. This should reclaim over 200 GB and bring the usage from 57.9 % down to roughly 40‑45 % – plenty of headroom for the future.

Feel free to ask what next steps you’d like to dig into (e.g., detailed contents of the containers dir, specific application logs, or targeted pruning commands).


Upgrade planning

So you may have noticed a little detail above – the system is a Fedora 42 system… and Fedora 43’s been out a month now! 😅 So, time to upgrade.

This example highlights some tools that are missing from the set currently provided by the linux-mcp-server. We’re including it for two reasons:

  • So you can see how this works manually
    You can see that even when the specific tools you need aren’t available in the MCP server, you can have the response give you instructions on commands to run on your own, then copy/paste the command output back into your chat to get analysis alongside the data the system is able to pull via the MCP tool calls;
  • To encourage you to submit additional tools to our project!
    We would love your additions to the project! Here’s where to get started: https://github.com/rhel-lightspeed/linux-mcp-server/blob/main/docs/CONTRIBUTING.md
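
To give a sense of what a contributed tool involves, here is a rough, hypothetical sketch of a read-only repolist tool in the style of the project’s existing tools. The execute_command helper and its signature are borrowed from the project’s get_journal_logs tool (quoted in the comments below); the tool name and error handling here are illustrative, not actual project code:

    # Hypothetical sketch of a new read-only linux-mcp-server tool.
    # execute_command mirrors the helper the existing tools use; the
    # function name and error handling here are illustrative only.
    async def get_dnf_repolist(host: str | None = None) -> str:
        """List enabled DNF repositories on the local or a remote host."""
        cmd = ["dnf", "repolist", "--enabled"]
        returncode, stdout, stderr = await execute_command(cmd, host=host)
        if returncode != 0:
            return f"Error listing repositories: {stderr}"
        return stdout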

Here’s the prompt I started with, with the same Goose + gpt-oss:20b + linux-mcp-server combination:

You are a Linux system administrator assistant analyzing a Fedora system for upgrade readiness.

TASK: Examine this Fedora 42 system and provide a comprehensive upgrade readiness report for Fedora 43.

ANALYSIS CHECKLIST:
1. Check current Fedora version: cat /etc/fedora-release
2. Review system updates status: dnf check-update
3. Identify third-party repositories: dnf repolist
4. List installed packages from non-Fedora repos: dnf list installed | grep -v @fedora
5. Check for broken dependencies: dnf check
6. Review disk space on root partition: df -h /
7. Check for unsupported or deprecated packages
8. Identify custom kernel modules: lsmod | grep -v "^Module"
9. Review SELinux status: sestatus
10. Check for pending system updates: dnf upgrade --refresh --assumeno

REPORT FORMAT:

# Fedora 43 Upgrade Readiness Report

## Current System Status
- Fedora version: [version]
- Kernel: [kernel version]
- Updates status: [current/outdated]

## Potential Issues
[List blocking issues with HIGH/MEDIUM/LOW severity]

## Third-Party Software
[List non-Fedora packages that may need attention]

## Recommendations
[Specific pre-upgrade steps needed]

## Overall Readiness: [READY/NEEDS ATTENTION/NOT READY]
Run the necessary commands and provide this analysis. Be specific about any issues found and give actionable recommendations.

Now, right away the model came back to me to complain it doesn’t have access to `dnf`, `cat`, etc. And that’s expected here. What it did was give me a list of homework to run for it to complete the analysis… for example, `dnf check-update` and `cat /etc/fedora-release`. I had a little back and forth in Goose with the model where it would ask me to run a command and I’d copy the output into the context. This resulted, finally, in the following report:


🤖 LLM output 🤖

Fedora 43 Upgrade Readiness Report

Analyzed System – Fedora 42 (Kernel 6.17.7‑200.fc42.x86_64)
Date – 2025‑11‑24


1. Current System Status

| Item | Value |
| Fedora release | 42 (Adams) |
| Kernel | 6.17.7‑200.fc42.x86_64 |
| Root filesystem | 953 GB LUKS, 58 % used (411 GB free) – > 15 % free |
| /var /usr | Same as root partition (mounted the same LUKS device) |
| SELinux | enabled (policy targeted), currently permissive |
| System‑wide packages | No held packages (dnf versionlock list empty) |
| System modules | No modules held. |
TL;DR – The base system is clean; the kernel is up‑to‑date for 42, there is ample disk space, and there are no version‑locked packages.


2. Third‑Party Repos & Packages

| Repository | Description | Current dnf repolist status |
| rpm-fusion-free, rpm-fusion-nonfree, rpm-fusion-updates | Widely used RPM‑Fusion components | Enabled |
| google-cloud-cli | Google Cloud SDK (RPM built for EL 10) | Enabled |
| visual-studio-code | Microsoft VS Code RPMs | Enabled |
| command‑line‑assistant, google‑chrome, PyCharm, unsupported‑fedora‑packages, goose, leaktk | Various Copr builds (mostly Fedora‑42 based) | Enabled |

What we need to verify

  1. Is a Fedora 43 build available for each of the non‑Fedora repos above?
  2. Does google-cloud-cli have a Fedora 43‑compatible package? (EL 10 build may not be usable).
  3. Are there any el10‑specific dependencies that could surface during the upgrade?
  4. Any custom kernel modules that will require re‑compilation against the 43 kernel.

3. Potential Issues & Severity

| Issue | Severity | Why it matters | Immediate Remedy |
| Non‑Fedora packages (e.g., google‑cloud‑cli) | MEDIUM | May contain packages built for EL 10; could conflict with Fedora 43 libs. | Verify if a Fedora 43 build exists or plan to remove/re‑build it. |
| Third‑party repos without 43 builds | MEDIUM | dnf system-upgrade pulls all enabled repos; lacking 43 packages causes transaction failure. | Disable such repos temporarily (dnf config-manager --set-disabled <repo>) or enable the 43 stream when available. |
| Custom kernel modules | LOW | Modules compiled against 42 kernel won’t load on 43; services may drop. | Re‑compile or patch modules after upgrade. |
| Broken dependencies reported by dnf check | HIGH | Would block the upgrade transaction. | Run sudo dnf check now; fix any reported issues. |
| Insufficient disk space | HIGH | Upgrade scripts need > 15 % free. | Not an issue (58 % free). |

Current High‑Risk Items – None detected yet.
Next Action – Confirm each third‑party repo has 43‑compatible packages and set SELinux to enforcing.


4. Recommendations (Pre‑Upgrade Checklist)

| Step | Command | Purpose |
| Confirm 43‑stream packages | `dnf list --available | grep <repo-id>` | |
| Verify SELinux enforcement | sestatus | Should show enforcing. |
| Check for broken dependencies | sudo dnf check | No errors should appear. |
| Refresh all repos | sudo dnf upgrade --refresh | Pull latest metadata. |
| Dry‑run upgrade | sudo dnf system-upgrade download --releasever=43 --allowerasing | Detect any missing packages or conflicts. |
| Optional: Disable problematic repos | sudo dnf config-manager --set-disabled <repo> | Prevent upgrade crash if a repo has no 43 build. |
| Persist SELinux config | sudo sed -i 's/SELINUX=permissive/SELINUX=enforcing/' /etc/selinux/config | Make enforcement permanent. |

5. Post‑Upgrade Actions

  1. Reboot into the new kernel (dnf system-upgrade reboot).
  2. Re‑compile any custom kernel modules against the new kernel (yum install kernel-devel-$(uname -r) then rebuild modules).
  3. Re‑enable any disabled repos once the upgrade completes.
  4. Verify critical services (systemctl status <service>) and check logs.
  5. Run sudo dnf distro-sync to ensure all packages are the latest from Fedora 43.

6. Overall Readiness

| Criteria | Status | Comment |
| Disk space | Ready | 58 % free |
| Package health | Ready | No held or broken packages |
| Third‑party repos | Conditional | Verify 43‑compatibility |
| Custom modules | Pending | Need to identify modules and re‑compile |
| Final verdict | READY (after addressing above points) | |

Next step – Run the three verification commands below and share the output with me:

sudo dnf check
sudo dnf repolist
dnf list installed | grep -v @fedora

With those outputs I can pinpoint any remaining blockers and give you a final, ready‑to‑proceed plan.

Happy upgrading!


Give it a try, let us know what you think!

You can see from these few examples that tool calling with LLMs is valuable for troubleshooting Linux systems. We could use your help building this and making it awesome! How can you help?

  • Give it a try, let us know what you think, file any bugs you find, and let us know what tools are missing that you could use! You can respond in the comments here, file a bug, or chat with us on Fedora matrix.
  • We’re primarily working with Goose as our client, but if you have another preferred client and want to help us support it better, we’d love to work with you!
  • Get involved in the linux-mcp-server project generally, we’d love to see your PRs!
  • Let us know what you’d like to see in the future. What workflows would you like to see supported? How do you see this making your Fedora or overall Linux experience better? What larger workflows do you see this plugging into?

Join us upstream at https://github.com/rhel-lightspeed/linux-mcp-server! And come chat with us in the Fedora AI/ML SIG chat room on Matrix!


37 Comments

  1. Spinster

    AI personification is a huge issue with dire health consequences.
    Please change the title of the article to something like “Check your system health (with the linux-mcp-server!)”
    The fedora system is not a person, it does not “feel”, or think. AI or otherwise.

  2. Sanqui

    sudo apt update? This is Fedora, not Debian. The AI can’t even get the OS right. Embarrassing. I’m sad to see slop like this peddled on this blog.

    • I used gpt-oss for that task, and the results otherwise were pretty impressive. If I had the space to make the post longer (it’s already quite long 🙂 ) I could show how to push back on it and tell it apt isn’t a Fedora tool. I had planned to record a screencast for this to show more of a back and forth / conversational version of the prompts I got but ran out of time. It was important to me to show exactly what came out of the model – definitely smaller models have limitations, and there are techniques to improve them and it’s something I’d love to work on together with the Fedora community!

      • Greg

        If it’s recommending the wrong tools out of the gate, we should not trust that, after multiple corrections, it’s giving sound results we can trust running anything as sudo.

        If a user knows that the recommendation is wrong, they wouldn’t be asking an LLM to begin with.

        • Hey Greg! If you’re in the troubleshooting scenario I outlined – if you didn’t know apt doesn’t work, you’d find out pretty quickly after trying it. In the same way, when asking people questions and asking people for help, sometimes you get wrong answers or misunderstandings. There is a pattern of human feedback loops with AI assistants like this where you can simply ask the AI if it is sure, or request that it review its answer before giving it to you (and sometimes this is built into system prompts – in my setup, I didn’t do this but it’s certainly something that could be made automatic on the client end) and it provides more accurate answers. (e.g. https://arxiv.org/html/2503.00172v1 + https://arxiv.org/html/2503.15850v2 are some papers that examine this.) I could have certainly also set up a system prompt to instruct the LLM to favor Fedora-specific answers, or even added a RAG tool built from Fedora’s documentation that it could consult to make sure the answers were Fedora-specific. I didn’t, and these are systems that can be built, but a lot of this stuff is early stages and my team is working to make it more usable and accessible for community members and projects.

          • Four

            What if the command was unsafe and damaged your system? It would be a little difficult to ask again.

            • This is a really excellent question, Four. So a couple of things:

              1. The linux-mcp-server is currently read-only.
              2. You can view the commands here – the approach we are taking is different than other tools which just generally give the LLM full shell access. We are only allowing access to a specific set of commands that are pre-vetted. And for instance, we have an allow list for the log analysis tool, so if a given log it wants to read isn’t in the allow list, it doesn’t get access to it. You can take a look at how the tools are implemented here:
                https://github.com/rhel-lightspeed/linux-mcp-server/tree/main/src/linux_mcp_server/tools

              So for instance, let’s look at the log tool get_journal_logs() –
              https://github.com/rhel-lightspeed/linux-mcp-server/blob/223a1d0b640b1568ad5107166f75de034c9d20eb/src/linux_mcp_server/tools/logs.py#L46

              Just a snippet:

              try:
                  # Validate lines parameter (accepts floats from LLMs)
                  lines, _ = validate_line_count(lines, default=100)

                  cmd = ["journalctl", "-n", str(lines), "--no-pager"]

                  if unit:
                      cmd.extend(["-u", unit])

                  if priority:
                      cmd.extend(["-p", priority])

                  if since:
                      cmd.extend(["--since", since])

                  returncode, stdout, stderr = await execute_command(cmd, host=host)

              You can see just from this snippet how it’s constructing the actual command run. There isn’t any ability to pass an open command here; the LLM isn’t really outputting commands to run. Rather, it’s interpreting the user’s input, reasoning to decide what troubleshooting steps to take, then asking the tool to run things. It understands what the tool allows, and if it tries to break through that, the MCP server won’t execute it and the LLM will need to reconstruct its request to fit within the parameters of the tool.

              • Alex

                Why are you pasting an LLM’s replies here? If people wanted those, they could ask it themselves. They’re trying to talk to you, a human being, assuming you still remember how to formulate your own comments.

              • Hey Alex! The blog post isn’t meant to be a blog post to walk people through how to fix, erm, my network card or upgrade my laptop to F43. More, it’s meant to demonstrate the technology we’ve built with linux-mcp-server, give some practical examples of how it can be used to troubleshoot or help manage Fedora systems, and demonstrate what kind of output you can get out of locally-hosted, open source-licensed models using open source software. If we wanted to write an article about how to manually debug and fix ath12k drivers or how to clean up your home directory, I think Brian and I would have written a very different article!

              • Alex

                LOL. LMAO, even. I asked you about why you were using an LLM to reply to the comments on this post, and your little autocomplete had no idea what I was talking about. Stunning, really. Why are we supposed to trust these things again?

              • I’m not using an LLM in any of the comments I’ve made. I’m just a really fast writer. Ask anybody who works at Red Hat – I live transcribe the company meetings and have for years.

          • Beth

            Fedora’s sudo configuration uses a timestamp, so once a user runs sudo successfully, privilege elevation persists for a period of time. That means an unnecessary or invalid elevated command doesn’t just waste a step; it creates a window where any subsequent command a beginner tries will execute with root privileges without prompting them again. Publishing an article that includes an incorrect privilege-elevated command without calling it out inline is therefore remarkably irresponsible. For a source positioned as trustworthy guidance, presenting an erroneous privileged command exposes users to avoidable risk and models unsafe practices.

            You could have redacted the command. You could have added a leading sentence calling it out as incorrect. Instead you took more time to justify it, and in the process showed that why it was problematic seems to have escaped you. You jumped to change the title of the article — but have pushed back repeatedly when confronted with the fact that your content may put a user’s system at risk? Not a good look for a “Distinguished Engineer”.

            • Hi Beth! This is a very reasonable point. I do call out each block of LLM output as such in the text. I had hoped to format it using the style of the website here via CSS classes etc. so it had more differentiation from the rest of the article, but I struggled a bit and gave up (I’m not a real big fan of the WP block editor :sad:). But I think you bring up a really important point, so I have made some light edits to the article to:

              1. Make more visually clear which parts of the article are LLM-output and which are not;
              2. Post a warning that the LLM output is in fact LLM output and should be examined and evaluated before being acted upon.

              I did not want to edit or one might say “doctor” the LLM output, because I’m trying to demonstrate what these things are capable of. In the call to action at the bottom I did call out that there’s a lot of room for improvements and optimization. I’ve pointed out very specific ones in other comments. Fedora is a software community of contributors I’ve been a part of since 2003 – it’s nice to talk through polished complete work, but this is very much something under active heavy development, and we’re working on it within the Fedora community to make it better. So I think it’s okay that it’s not a perfect solution. That means it’s an opportunity for collaboration to build something better that works for us as a community. I really want to see open source contributors who understand the importance of user agency over technology get more involved in AI – it’s not going away, and we need to make sure those of us who understand what makes open source so important have a seat at the table to shape the technology for the better before it’s too late!

              Thanks for the feedback!

      • jmiven

        Hello Máirín,

        If I had the space to make the post longer (it’s already quite long 🙂 ) I could show how to push back on it and tell it apt isn’t a Fedora tool.

        The problem I see with that is that you have to know that apt isn’t a Fedora tool in the first place. It’s obvious for a lot of us, but for beginners looking to “talk with [their] Fedora system for understanding [their] system and getting help troubleshooting it”? In my experience, it is not always.

        If you’re looking for help troubleshooting a problem, you probably can’t troubleshoot the LLM’s answer at the same time.

      • cholling

        So, you could ask a glorified auto-complete, and when it tells you something you know is wrong, you push back. Keep going until it tells you something you don’t know is wrong. Is it right? Or is it just wrong in ways that exceed your expertise? Who knows?

        Or, you could just learn how to use command line tools and interpret the output.

        LLMs are a broken solution desperately in search of a problem. They don’t belong in Fedora.

        • Hi cholling, the overarching goal for many open source and free software projects (Fedora is no exception) is to make open source technology more accessible and usable by as broad a contributor base as possible, because we want as many people as possible to enjoy the benefits free and open source software provides them. Giving users agency over technology that impacts their lives is critically important. I grew up when proprietary operating systems were really the only viable option for personal computing, Windows was the main one, and I lost a lot of homework due to blue screens of death without any ability to use something better or to fix whatever caused the issues.

          Part of making free and open source software more accessible is making it easier to use. Higher level programming languages and command-line tools are many layers of abstraction over the foundations of computing. There are people who need to write assembly, but writing assembly isn’t fun (I did it as part of my computer science degree); it is much more comfortable to program computers in higher abstraction layer languages like C or Python, or to interact with them via command line tools or your own shell scripts. That is far too difficult and inaccessible to a large segment of the population who could benefit from free and open source software. They deserve the benefits of software freedom the same as those of us who are comfortable with the command line.

          LLMs as a user input processing mechanism are, I believe, simply a new and higher layer of abstraction. At the end of the day there’s an interface between humans and computers, and as we’ve built layers of abstraction on top of the base, we’ve progressively lightened the burden we place upon humans to learn and understand and be able to speak to the computer in ways the computer understands natively. LLMs are a layer that suddenly ease a lot of burden off of human operators and result in the computer instead learning how to better understand native human speech. I don’t think that’s a bad thing. I do not like the requirement of having to learn a ton of obscure commands and tools in order to be able to benefit from technology – it becomes a gatekeeping mechanism.

          I talked about this at length during the process of when we redesigned Anaconda in the Fedora Community back in 2010/2011… eg.

          https://blog.linuxgrrl.com/2010/10/01/you-must-be-this-tall-to-ride-__/

          This is a similar topic, and well before the explosion of LLMs / etc. LLMs solve a very real problem – parsing and reasoning through unstructured user input and structuring and formatting it into language the computers can understand. In the same way, ML techniques are making it more possible for humans who speak different languages to understand one another.

          Obviously the technology is still early days and not perfect. We’re in a great position as open source practitioners to influence it to be better the open source way.

  3. Boricua

    I installed Goose, but at launch it fails with an error about lacking libssl.so.1.1.

  4. Bob

    Thanks for this article. I can see that this new tool will be really useful. Normally I end up with 34 tabs open on StackExchange and/or similar trying to weld together something that makes sense. Now I can get all of those suggestions and more in one handy space to try to find something that might help. I might even be able to save disk space by deleting the man pages…. just kidding. Now Linux has its own Copilot I can finally break the Windows chains. By the way, it’s Generative AI.

  5. No Thanks

    If you’re going to use ChatGPT (gpt-oss) to write the article, I’m not going to bother reading it. Shame on you.

    • Hi No Thanks!

      gpt-oss isn’t ChatGPT. It is an open weight, open source licensed model that was trained by OpenAI, yes. And no, they’re not personally my favorite company. I chose that model because of a number of factors:

      • Open source license, open weights
      • Trained with tool calling ability, which is important for MCP server usage
      • Enough of a pretrained base knowledge on Linux to be useful in reasoning (although as you can see from the output, perhaps a bit Debian-ecosystem heavy on the pretraining data.)
      • On-prem.

      Larger hosted models like the Claude family of models or Gemini are faster and provide better results in my experience, but, the data on your personal system is sensitive, and that’s not something I feel comfortable sending to hosted cloud providers. Using open source licensed models that are small enough to run locally or within your home network but also capable enough to actually call tools and reason through which tools to call are a sweet spot in terms of keeping your data private but allowing you to try out this new technology.

      But no, not my favorite company.

  6. Antony

    Hello, hope everyone is well. Please allow me to mention that I am quite new to Linux (like October 2025 new).

    After a lot of trials with different OSes I chose Fedora 42 server (headless).

    My aim was to slowly remove myself from Windows and everything Microsoft (OneDrive, Windows etc). I had used Windows Server in my 20s for a while so I have a little knowledge, but Linux is alien to me (not so much now). I now have 3 systems:

    Desktop – Fedora 43 KDE Plasma
    Laptop – Fedora 43 KDE Plasma
    Server – Fedora 43 Server headless (updated from 42 yesterday)

    Everything works as it should. I have Nextcloud for my storage, and Cloudflare is set up on Zero Trust so no ports are open and everything is tunneled.

    I had started to set up n8n (I know this may seem a small thing for some) but it says hello and can tell the time. So stumbling across this post has got me a little interested, as I’m finding n8n a little difficult and having an AI agent is something I really want to get involved with.

    I have been using Copilot (I know, I know, I want away from Microsoft, but it’s been super helpful in setting everything up, and as I still have an active subscription till September 2026 I shall keep using it) and I’m learning.

    The amount of Obsidian notes I have is ridiculous but super helpful to keep going back over and keep learning.

    I’m hoping linux-mcp-server is something I can integrate into my server with the mentioned Goose LLM setup – with some time and help, who knows.

    • Hi Antony! Thanks for sharing your setup – going open source and using Linux as your daily driver is a great journey and I hope you’re having a fun time so far.

      What might be interesting for you to try is building a RAG DB out of your obsidian notes! One of my teammates Ellis Low has been working on a project called docs2db that would allow you to do this. You could then hook up that RAG you build to goose and could potentially use it alongside linux-mcp-server. (Full disclosure, I haven’t tried this yet myself – and I am an obsidian user so I want to try now 🙂 )

      Here’s the docs2db repo if you want to try it out:
      https://github.com/rhel-lightspeed/docs2db

      We hang out in the Fedora AI/ML channel on matrix and we’d love to hear from you if you try it out.

  7. Nikos

    Looks like the linux-mcp-server fedora package could use RAG of the fedora docs to help ground it in Fedora land.

  8. Kostas Apostolopoulos

    If this project evolves further and can do write operations as well, system administration will change radically, if not become obsolete.
    I can imagine a scenario where you could have it check the logs from a fleet of IoT devices every ten minutes and read a template with actions to take, or even have it run Ansible playbooks if logs report this or that.

    • I’d hate for it to become obsolete, and I think the human skillset will always be needed, but I hope that these kinds of tools can reduce toil and make the data systems collect more visible and consumable to inform system administrator decisions and plans. Your proposed scenario should be quite possible!

  9. Penny

    This sounds marvelously handy, thanks for the writeup. Very excited to see performance if/when RAG on Fedora documentation is incorporated.

  10. serafean

    This is my “someone is wrong on the Internet” moment… And only for the wireless “likely root causes” part.

    ath12k_pci_disable_fast_rx doesn’t seem to exist as a module parameter. An internet search points only back to here. grepping kernel 6.17.11 sources doesn’t produce anything either.
    It suggests to use apt…
    What module does the pci_bus_reset parameter belong to? I guess it wants me to reset the pci bus, that’s echo 1 > /sys/ … /reset
    The spaces around “=” will make copy/pasting into modprobe.conf unworkable.
    ath12k_pci_disable_ps… does it even exist as a parameter? grepping sources gives nothing. And google points back here…
    I’m not even sure ath12k_pci exists as a separate module. yes, ath12k has a pci.c, which namespaces its logs. Per found lspci output, it uses the ath12k module, which contains ath12k_pci driver.

    There doesn’t seem to be almost anything correct about the suggested fixes. This is worse than a bad quality forum post.
    Prove me wrong… I don’t have an ath12k to check what actually happens, but modinfo is stubbornly silent about those parameters.
    Point to where the parameters are registered using the module_param_* macro. I will happily concede to an out of tree driver.

    • Fair points! It was enough for me, though, that it found the problem. I wasn’t sure if the issue was with the router or the laptop, because it only happens in my office and nowhere else. It found the errors shown in dmesg and gave me some food for thought on where to look. To be honest, not that much different from my experience using online forums to debug issues, except, it saved me a lot of typing and effort.

