Learning about MCP servers and AI-assisted GitHub issue triage

Today is another Red Hat day of learning. I’ve been hearing about MCP (Model Context Protocol) servers for a while now – the idea of giving AI assistants standardized “eyes and arms” to interact with external tools and data sources. I tried it out, starting with a toy example and then moving on to something actually useful for my day job.

First steps: Querying photo EXIF data

I started with a local MCP server to query my photo library. I have thousands of photos with EXIF metadata and manually gardened tags, so this seemed like a good test case for structured data queries.

After I installed exif-mcp and fixed its build, it connected nicely to the Claude CLI:

/mcp
❯ 1. exif-mcp            ✔ connected · Enter to view details
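
For reference, registering a local stdio server with the Claude CLI looks roughly like this – a sketch, as the path to the built entry point is hypothetical and depends on where your exif-mcp checkout puts it:

# everything after "--" is the command that launches the local stdio server
claude mcp add exif-mcp -- node ~/src/exif-mcp/dist/index.js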

Time to test it:

get the EXIF metadata for all files in '~/Fotos/Genf Okt 2025' and list
all photos with tags 'martin-pitt' and 'annett-pitt'

Perfect! I found the photos with those tags:

  • Photos with both 'martin-pitt' AND 'annett-pitt' tags:
    20251022_134100.jpg (also tagged: cern, denkmal, genf, museum)
  • Photos with only 'annett-pitt' tag:
    20251022_161250.jpg (also tagged: cern, genf, museum)
  • Photos with only 'martin-pitt' tag:
    None found

Both tagged photos were taken at CERN in Geneva on October 22, 2025.

This worked, but it took about a minute – the whole process of finding the files and querying each one individually is not particularly fast.
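For comparison, the conventional way to run this tag query is a single exiftool call, which finishes in seconds (a sketch, assuming the tags live in the usual XMP Subject/IPTC Keywords fields):

# list photos tagged with both people, plus their other tags
exiftool -q \
    -if '$Subject =~ /martin-pitt/ and $Subject =~ /annett-pitt/' \
    -p '$FileName: $Subject' ~/Fotos/'Genf Okt 2025'

Let's try something more interesting: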

using the gps coordinates, find all photos which were taken in Montreux

I found 34 photos taken in Montreux, Switzerland on October 23, 2025, based on their GPS coordinates:

  • Main cluster (Montreux center area - 27 photos):
    Around coordinates 46.429-46.436°N, 6.909-6.912°E, taken between 10:49 AM - 12:34 PM
  • Secondary cluster (7 photos):
    Around coordinates 46.414-46.416°N, 6.918-6.928°E, taken in the afternoon (12:51 PM - 1:17 PM)

The complete list includes photos like:
- 20251023_124920.jpg through 20251023_151748.jpg

That took an agonizing 6 minutes to process all the photos. But it correctly identified the locations based on GPS coordinates and even clustered them sensibly.
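
Here as well, the deterministic equivalent is a quick exiftool filter over the GPS tags (a sketch; the bounding box is eyeballed from the coordinates above):

# -n makes exiftool print and compare GPS tags as plain signed decimals
exiftool -q -n \
    -if '$GPSLatitude >= 46.40 and $GPSLatitude <= 46.44 and $GPSLongitude >= 6.90 and $GPSLongitude <= 6.93' \
    -p '$FileName  $GPSLatitude, $GPSLongitude' ~/Fotos/'Genf Okt 2025'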

open the files in the secondary cluster in `eog`

I've opened the 7 photos from the secondary cluster (afternoon photos around the eastern part of Montreux) in Eye of GNOME:

  • 20251023_145130.jpg
  • 20251023_145810.jpg
  • 20251023_150650.jpg
  • 20251023_150817.jpg
  • 20251023_150941.jpg
  • 20251023_151145.jpg
  • 20251023_151748.jpg

Nice! I mean, that’s something I can do in photo gallery apps directly, but it demonstrates how MCP servers can give AI models additional capabilities.

GitHub issue gardening

Cockpit has accumulated a large number of GitHub issues over the years. Our small team tries to keep up with triaging them, and every now and then we have an initiative to garden and clean them up. I wanted to see if GitHub’s official MCP server could help with triaging and labeling issues.

Setting it up was straightforward:

claude mcp add --transport http github https://api.githubcopilot.com/mcp \
  -H "Authorization: Bearer $(< ~/.config/github-cockpit-issue-triage-token.txt)"

cockpit-podman duplicates

look at all open cockpit-project/cockpit-podman's issues, and identify
potential duplicates. Do *not* yet change anything

It came back with an extensive analysis, grouping issues into categories like “Podman Service Detection Issues”, “Quadlet Support”, or “Container Filtering/Sorting”:

Potential Duplicate Issues (Open Only)

1. Podman Service Detection Issues (Likely related/duplicates)

  • #2067 - "WebUI shows 'Podman-Service failed'"
  • #890 - "'Podman Service is Not Active' while podman is running normally"
  • #889 - "User service appears to crash, but is still running"

Analysis: All three deal with the podman service appearing to fail or not be detected properly when it's actually running.

2. Quadlet Support (Related feature requests)

  • #2287 - "Show command and ports for inactive quadlets"
  • #2055 - "quadlet container service integration"
  • #2054 - "quadlet: 'Created' is the same as uptime, as systemd creates the container on the fly"
  • #1236 - "Support quadlet configuration"

Analysis: These are all related to improving quadlet support and could potentially be consolidated into a single tracking issue.

3. Container Filtering/Sorting (Related feature requests)

  • #1098 - "Improve filtering of containers list"
  • #1097 - "Sort containers"

Analysis: Both are about improving how users can filter and organize the container list.

4. Container Console Issues (Possibly related)

  • #1654 - "can't reconnect to console after entering 'exit' in the console (shows 'disconnected' in red font)"
  • #1412 - "Shell commands like ls do not work on the first line of the Console"

Analysis: Both deal with console interaction problems.

5. Container Editing/Modification (Related feature requests)

  • #1293 - "Missing 'Edit' option"
  • #1804 - "Updating an image does not update the container"

Analysis: Both relate to the inability to modify containers after creation.

6. UI Layout/Spacing Issues (Related)

  • #2093 - "Layout and alignment issues"
  • #2081 - "CPU/Memory/Ports and other details not hidden on mobile"
  • #2073 - "Expanding history tab of an image on smaller displays does not properly wrap"
  • #2072 - "Border has no spacing in mobile layout"

Analysis: Multiple issues related to UI layout and responsive design problems.

7. Create Container Dialog Issues (Related)

  • #2071 - "Delete icon misaligned when there is a validation error"
  • #2070 - "Inserting multiple env variables always gives a validation error"
  • #1459 - "Few issues with Container create validation"

Analysis: All related to validation and UI issues in the create container dialog.

8. Network/Port Features (Related feature requests)

  • #2233 - "Pod ports not displayed as ranges"
  • #1682 - "Add 'Both' protocol for TCP & UDP port mapping"
  • #906 - "Better Networking Options"
  • #1538 - "Specific IP address for container"

Analysis: All related to improving networking configuration options.

9. Test Infrastructure Flakes (Related)

  • #2306 - "HealthCheck pixels sometimes don't match"
  • #1724 - "testHealthcheckSystem pixel flake"
  • #1453 - "TestApplication.testHealthcheckSystem flaky pixel test due to list change animation"
  • #1450 - "#containers-filter loses focus in testPods"

Analysis: Multiple test flakiness issues that could be tracked together.

10. Restart/Autostart Functionality (Related feature requests)

  • #2007 - "Container can not be stopped from UI when set to restart 'on-failure'"
  • #1468 - "Feature Request: Add checkbox to start a container or pod automatically with the cockpit service"
  • #1147 - "Support podman auto-update"

Analysis: All related to automatic container restart/start functionality.

11. Volume/Storage Features (Related feature requests)

  • #1900 - "Add option to copy data from image when creating volume"
  • #1640 - "support volumes that aren't bind mounts"

Analysis: Both about improving volume management.

12. Image Display/Management (Related)

  • #1962 - "Show container/image opencontainer label information"
  • #913 - "Nonfiltered search results; container images that aren't available for the architecture are shown"
  • #912 - "Lack of progress indication of container image download"

Analysis: All related to image information display and download.

13. Console/Terminal Enhancement Requests (Related)

  • #1830 - "Allow cockpit's podman console to take more window area"
  • #1829 - "Have a URL that would go right into a container"

Analysis: Both about improving console access.

Summary of High-Confidence Duplicates (Open Issues)

Most likely to be actual duplicates:

  • Podman service detection (#2067, #890, #889) - 3 issues
  • Container list filtering/sorting (#1098, #1097) - 2 issues
  • Mobile/responsive layout issues (#2093, #2081, #2073, #2072) - 4 issues
  • Create container validation (#2071, #2070, #1459) - 3 issues
  • Test flakiness (#2306, #1724, #1453, #1450) - 4 issues

When I actually looked at these, most groups turned out not to be actual duplicates, but merely related issues. Claude’s confidence was not justified. However, the grouping was genuinely helpful – it let me triage related issues together and quickly build a proper mental model. For instance, I opened the health check pixel flakes group and cleaned them up a bit.

Stale questions

go through all open cockpit-project/cockpit issues labeled 'question',
where the reporter did not respond for at least two weeks after a
developer asked for info

I found 6 open issues labeled 'question' in cockpit-project/cockpit. After analyzing the comment threads, I identified 4 stale issues where a developer asked for information and the original reporter hasn't responded for at least 2 weeks:

Stale Question Issues

  • #22437 - "cockpit-tls gnutls_handshake failed: A TLS fatal alert has been received"
    Reporter: @Barry40
    Developer request: 2025-09-15 (@martinpitt asked for network setup, browser version, JS console messages, journal messages)
    No response from reporter: ~2 months
  • #22390 - "Cache drives"
    Reporter: @johnniek2jlb
    Developer request: 2025-09-01 (@martinpitt asked "What is a 'cache drive'?")
    No response from reporter: >2 months
  • #21658 - "cockpit-tls: gnutls_handshake failed: A TLS fatal alert has been received."
    Reporter: @masvild
    Developer request: 2025-02-27 (@martinpitt asked what they were trying to do and what actually failed)
    No response from reporter: >4 months
  • #20452 - "My MOTD is not getting updated for an enabled cockpit.socket."
    Reporter: @supakeen
    Developer request: 2024-05-10 (@martinpitt asked to check /run/cockpit/motd and systemctl status cockpit.socket)
    No response from reporter: >7 months

This wasn’t quite precise enough – on several of these issues, there was developer/user dialog after the ‘question’ label was added. Let me refine the prompt:

go through all open cockpit-project/cockpit issues labeled 'question'.
Look at when the label was added; group them by "no response for at
least two weeks", "reporter responded", and "someone else responded"

Based on when the 'question' label was added to each issue, here's the breakdown of all 6 open issues:

Group 1: No Response for at Least Two Weeks (4 issues)

  • #22437 - "cockpit-tls gnutls_handshake failed: A TLS fatal alert has been received"
    Label added: 2025-09-15 by @martinpitt
    Reporter: @Barry40
    Status: No response from reporter for ~2 months
    Others who responded: @atgmtg, @roman-kvasnikov, @connelhooley
  • #22390 - "Cache drives"
    Label added: 2025-09-01 by @martinpitt
    Reporter: @johnniek2jlb
    Status: No response from reporter for >2 months
    Others who responded: @jelly (developer), @raynet
  • #21658 - "cockpit-tls: gnutls_handshake failed: A TLS fatal alert has been received."
    Label added: 2025-02-27 by @martinpitt
    Reporter: @masvild
    Status: No response from reporter for >4 months
    Others who responded: @gloryandel, @martinpitt (developer), @Superuser18
  • #20452 - "My MOTD is not getting updated for an enabled cockpit.socket."
    Label added: 2024-05-10 by @martinpitt
    Reporter: @supakeen
    Status: No response from reporter for >7 months
    Others who responded: @urbenlegend, @dmoerner

Group 2: Reporter Responded (2 issues)

  • #21098 - "cockpit crashing at random"
    Label added: 2025-01-08 by @martinpitt
    Reporter: @gen-angry
    Status: Reporter responded same day (2025-01-08)
  • #21032 - "Client: flatpak-spawn: Invalid byte sequence in conversion input"
    Label added: 2024-09-23 by @martinpitt
    Reporter: @q7nm
    Status: Reporter responded same day (2024-09-23)

Group 3: Someone Else Responded

All 6 issues have had responses from people other than the reporter. The key distinction is whether the original reporter came back to provide the requested information.

That was better. However, with only 6 issues total, I could have done this manually faster.
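Indeed, the by-hand version is essentially a one-liner with gh and jq (a sketch – it only shows the last comment time per issue and leaves the “did the reporter respond” judgment to the human):

# open 'question' issues with reporter, last comment time, and title
gh issue list -R cockpit-project/cockpit --label question --state open \
        --json number,author,comments,title |
    jq -r '.[] | "#\(.number)\t\(.author.login)\t\(.comments[-1].createdAt // "never")\t\(.title)"'

Let’s try something I can’t easily do by hand.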

Bulk labeling issues

find all cockpit issues which relate to storage/udisks bugs/requests and don't
already have a "page:storage" label; show their titles to me for confirmation

This failed with “invalid Message: More than five AND / OR / NOT operators were used”. Claude then tried to query the issues individually, which failed with “Prompt is too long”.
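
That limit is real – the GitHub search API caps boolean operators at five per query. Presumably Claude built something along these lines under the hood (a guess – I did not see the actual query):

# six ORs – one more than the GitHub search API accepts
gh search issues --repo cockpit-project/cockpit \
    'storage OR udisks OR filesystem OR partition OR raid OR btrfs OR lvm'

I couldn’t get out of this state, so I restarted Claude and tried a different approach: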

get a list of all open cockpit-project/cockpit GitHub issues with their
titles and labels

That worked, with a warning: “⚠ Large MCP response (~55.0k tokens), this can fill up context quickly”. Now that it had all the data in its working memory:

list all issue ID+titles that are storage/udisks/file system related,
and don't already have a "page:storage" label

Based on the retrieved issues, here are the storage/udisks/file system related issues that don't already have a "page:storage" label:

  • #22488 - SMART Infos Not Displayed on Ubuntu 24.04
  • #22382 - Drives power settings section with hdparm/hd-idle cockpit-storaged
  • #22339 - Disk Cloning
  • #22306 - Changing mount point on one partition, changes it on a completely other device.
  • #22292 - Can't delete Btrfs filesystems set up with RAID (device busy)
  • #21930 - storage: Creating btrfs subvolumes doesn't check for duplicate mount points
  • #21751 - Stratis 3.8.0
  • #21708 - Failed to add new partition to the table: Numerical result out of range
  • #21700 - Cockpit storaged leaves RAID disks in unknown state
  • #21550 - Add timestamp to btrfs storage snapshot overview and detail
  • #21370 - storage: Multiple distinct devices merged into "Multipathed devices"
  • #21350 - fsreplace1 support mode as attribute (file operations related)

These issues are clearly related to storage functionality (SMART data, disk drives, RAID, Btrfs, partitions, Stratis, etc.) but are missing the page:storage label that would help categorize them properly.

That looks reasonable, except for #21350 – but that’s due to my fuzzy prompt.

add page:storage label to all of them except #21350

That worked! The operation so far cost $1.31, mostly for 9.5K claude-sonnet tokens and a bit for 1.4K claude-3-5-haiku tokens. That’s roughly what one minute of a human developer costs, so quite worth it.
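
For comparison, a deterministic version of the same exercise fits into a small gh/jq pipeline (a sketch – the title regex is a crude stand-in for Claude’s topic matching, which also considered issue bodies):

# label open issues whose titles look storage-related and lack the label
gh issue list -R cockpit-project/cockpit --state open --limit 1000 \
        --json number,title,labels |
    jq -r '.[]
        | select([.labels[].name] | index("page:storage") | not)
        | select(.title | test("storage|udisks|btrfs|raid|partition|stratis"; "i"))
        | .number' |
    while read -r num; do
        gh issue edit "$num" -R cockpit-project/cockpit --add-label page:storage
    done

Coming up with that regex is, of course, exactly the fuzzy-matching part that Claude did better.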

repeat that exercise for the "page:networking" label

This was much faster, as it already had the bug list in its brain. I confirmed and it worked. I repeated this for software updates/apt/dnf related issues with the “page:update” label as well. For the record, I’m now at $2.17 total cost.

Duplicates in the main cockpit repo

based only on the already retrieved info in our working memory, identify
possible duplicates; fetch details for the candidates if necessary

Based on the detailed examination, here are the confirmed and potential duplicate issues:

1. cockpit-tls CPU usage issues:

  • #22274 - "cockpit-tls sporadically uses 100% CPU until getting force closed" (Fedora 42, has extensive logs)
  • #21710 - Listed but returns 404 (may have been closed/deleted)

2. TLS handshake failures:

  • #21658 - "cockpit-tls: gnutls_handshake failed: A TLS fatal alert has been received"
  • Multiple reports mention similar gnutls_handshake errors in logs (but #21658 is the only dedicated issue)

3. Login/Authentication issues:

  • #22095 - "Switching to Admin with SELinux context sysadm_u"
  • #21854 - Same title: "Switching to Admin with SELinux context sysadm_u"

Let me verify the SELinux issues:
Looking at the list, #22095 and #21854 have the exact same title and both are labeled as bugs. These are likely duplicates.

4. SSH keys disappearing:

  • #22045 - "SSH keys disappear from list after some time not connected to cockpit web ui"
  • This appears unique

Based on the data available, the most likely actual duplicate is:

  • #22095 and #21854 (same exact title about SELinux sysadm_u)

Would you like me to fetch details on these two to confirm they're duplicates before taking action?

There was some confusion here. While it got #22095 correct, #21854 is completely different. After I said “yes” to fetch details, it actually realized that and apologized. I ran into similar cases in the “page:storage” labeling above as well.

Final conclusion from Claude:

Based on the retrieved data, I found NO confirmed duplicate issues. While some issues share similar symptoms (like cockpit-tls CPU usage or TLS handshake errors), they appear to be different manifestations or contexts of potentially related problems rather than exact duplicates.

OK, so we’re not actually doing too bad a job at gardening our issue tracker – in fact, I spent almost two weeks in August/September doing exactly that.

Conclusion

I learned what MCP servers do in practice – they’re essentially an alternative API that allows fuzzy, natural language queries against structured data sources like GitHub. For bulk labeling and finding issues by topic, it worked quite well and saved real time. The cost was reasonable for the value delivered.

However, the model’s working memory and attention span are still fairly limited and sometimes buggy – just like our puny human wetware 🧠. It was less helpful at identifying stale or duplicate issues.

For tasks where I need high precision (like identifying actual duplicates), I’m still better off doing it manually. But for tasks where “good enough” categorization helps (like adding labels or grouping related issues), this GitHub AI API is useful.