Overview

A quick-start guide to using MCP. This article walks you through the basic concepts of the MCP protocol, introduces the well-known punkpeye/awesome-mcp-servers repository, and demonstrates how to call multiple MCP servers (weather lookup + calculator) via a FastMCP client combined with LLM function calling. Official FastMCP tutorial: mcp-client

1. Introduction to the MCP Protocol

MCP (Model Context Protocol) is an open protocol that provides a secure, standardized interface for interaction between large language models (LLMs) and local or remote resources, much like an API designed specifically for LLMs.
Through MCP, developers can expose capabilities such as file access, database connections, and external API calls to a model in the form of "tools" and "resources"; the model side then completes complex tasks simply by making protocol calls.
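Under the hood, MCP messages are JSON-RPC 2.0. As a rough illustration (field names follow the MCP specification; the "add" tool here is a hypothetical example), invoking a tool amounts to the client sending a request like the following, shown as a Python dict:

    # Illustrative only: the JSON-RPC 2.0 payload a client sends to invoke a tool.
    tools_call_request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",            # MCP method for tool invocation
        "params": {
            "name": "add",                 # which tool to run (hypothetical example)
            "arguments": {"a": 1, "b": 2}  # arguments matching the tool's input schema
        },
    }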

2. Repository Introduction: punkpeye/awesome-mcp-servers

punkpeye/awesome-mcp-servers is a curated collection of MCP servers for both production and experimental use, covering file systems, databases, network requests, OpenAPI, and many other implementation styles, and it provides a rich set of references for building AI agents.
The repository recommends a large number of high-quality MCP server implementations, ranging from community-driven projects to enterprise-grade solutions, making it easy to evaluate options and build on them.

3. MCP Tutorial: FastMCP Client Overview

FastMCP is a high-level, Pythonic framework for quickly building and calling MCP servers. It handles runtime management, protocol handling, content types, and error control out of the box, letting developers focus on business logic.
FastMCP 2.0 greatly expands the client side: it supports multiple transports (stdio, SSE, and in-memory direct connection) and automatically generates call schemas for tools and resources, making LLM integration much simpler. It can also convert an OpenAPI spec or a FastAPI app directly into an MCP server, and clients such as Cursor can connect to an MCP server directly.
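For example, converting a FastAPI app takes a single call. The following is a minimal sketch assuming FastMCP 2.x, where FastMCP.from_fastapi is the documented entry point; verify it against the version you have installed:

    # Minimal sketch (assumes FastMCP 2.x): expose an existing FastAPI app as an MCP server.
    from fastapi import FastAPI
    from fastmcp import FastMCP

    app = FastAPI()

    @app.get("/status")
    def status():
        # A plain HTTP endpoint; FastMCP exposes routes as MCP tools/resources.
        return {"ok": True}

    mcp = FastMCP.from_fastapi(app=app)

    if __name__ == "__main__":
        mcp.run()  # defaults to the stdio transport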

Installation and Initialization

pip install fastmcp
  • Import and initialize the client:

    from fastmcp import Client
    async with Client("server.py") as client:
        tools = await client.list_tools()
        print("工具列表:", [t.name for t in tools])
    

Core Methods

  • list_tools(): list the name, description, and parameter schema of every available tool.
  • call_tool(name, arguments): invoke the named tool with the given arguments and return its result (see the sketch after this list).
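A minimal end-to-end sketch of the two methods. It assumes a running SSE server exposing the add tool from the example below; for a URL ending in /sse, the FastMCP client infers the SSE transport:

    # Minimal sketch: discover tools, then call one by name.
    import asyncio
    from fastmcp import Client

    async def main():
        async with Client("http://localhost:8000/sse") as client:
            tools = await client.list_tools()
            print("Tools:", [t.name for t in tools])
            result = await client.call_tool("add", {"a": 1, "b": 2})
            print("add(1, 2) ->", result)  # typically a list of content blocks

    asyncio.run(main())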

4. Example Code (calling multiple MCP servers by combining MCP with function calling)

# server1.py
from fastmcp import FastMCP

mcp = FastMCP("Demo 🚀")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

@mcp.tool()
def multiply(a: float, b: float) -> float:
    """Multiplies two numbers."""
    return a * b

# Static resource
@mcp.resource("config://version")
def get_version(): 
    return "2.0.1"

# Dynamic resource template
@mcp.resource("users://{user_id}/profile")
def get_profile(user_id: int):
    # Fetch profile for user_id...
    return {"name": f"User {user_id}", "status": "active"}

@mcp.prompt()
def summarize_request(text: str) -> str:
    """Generate a prompt asking for a summary."""
    return f"Please summarize the following text:\n\n{text}"

if __name__ == "__main__":
    mcp.run(transport="sse")
# server2.py
from fastmcp import FastMCP

mcp = FastMCP("Demo 🚀")

@mcp.tool()
def search_weather(place: str, date: str) -> str:
    """查询天气"""
    return f"{date}{place}天气是晴天"


# Static resource
@mcp.resource("config://version")
def get_version(): 
    return "2.0.1"

# Dynamic resource template
@mcp.resource("users://{user_id}/profile")
def get_profile(user_id: int):
    # Fetch profile for user_id...
    return {"name": f"User {user_id}", "status": "active"}

@mcp.prompt()
def summarize_request(text: str) -> str:
    """Generate a prompt asking for a summary."""
    return f"Please summarize the following text:\n\n{text}"

if __name__ == "__main__":
    mcp.run(transport="sse", port=8001)

# client.py

from typing import List, Dict, Any
from openai import OpenAI
import json
from fastmcp import Client
import asyncio

class OpenAIFunctionCaller:
    def __init__(self, api_key: str, server_urls: list, openai_api_base: str):
        self.client = OpenAI(
            api_key=api_key,
            base_url=openai_api_base
        )
        self.server_urls = server_urls
        self.available_tools, self.tool_server_map = asyncio.run(self._get_server_tools())

    async def _get_server_tools(self):
        """从所有服务器获取可用的工具列表,并建立工具名到server的映射"""
        all_tools = []
        tool_server_map = {}
        for url in self.server_urls:
            try:
                client = Client(url)
                async with client:
                    tools = await client.list_tools()
                    print(f"Connected via SSE: {url}, found tools: {tools}")
                    for tool in tools:
                        all_tools.append(tool)
                        tool_server_map[tool.name] = url
            except Exception as e:
                print(f"Error fetching tools from server {url}: {e}")
        return all_tools, tool_server_map

    def _create_function_descriptions(self) -> List[Dict[str, Any]]:
        """将服务器工具转换为OpenAI function calling格式"""
        functions = []
        for tool in self.available_tools:
            function_desc = {
                "name": tool.name,
                "description": tool.description,
                "parameters": tool.inputSchema  # 直接使用工具的 inputSchema
            }
            functions.append(function_desc)
        return functions
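    # For reference, a rough sketch (illustrative, not verified output) of the
    # function description this produces for the add tool in server1.py:
    #
    # {
    #     "name": "add",
    #     "description": "Add two numbers",
    #     "parameters": {
    #         "type": "object",
    #         "properties": {"a": {"type": "integer"}, "b": {"type": "integer"}},
    #         "required": ["a", "b"]
    #     }
    # }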

    def process_query(self, user_query: str) -> Dict[str, Any]:
        """处理用户查询并调用相应的工具"""
        try:
            # 1. Call OpenAI for function calling (legacy API; newer SDKs prefer tools=)
            response = self.client.chat.completions.create(
                model="gpt-4o-mini-2024-07-18",  # 或其他支持function calling的模型
                messages=[
                    {"role": "user", "content": user_query}
                ],
                functions=self._create_function_descriptions(),
                function_call="auto"
            )

            # 2. Read the function and arguments chosen by the model
            message = response.choices[0].message
            print(message)
            if message.function_call:
                function_name = message.function_call.name
                function_args = json.loads(message.function_call.arguments)

                # 3. Invoke the corresponding tool on its server
                tool_response = asyncio.run(self._call_server_tool(function_name, function_args))
                return {
                    "success": True,
                    "tool_name": function_name,
                    "arguments": function_args,
                    "result": tool_response
                }
            else:
                return {
                    "success": False,
                    "error": "No function was called by the model"
                }

        except Exception as e:
            return {
                "success": False,
                "error": str(e)
            }

    async def _call_server_tool(self, tool_name: str, arguments: Dict[str, Any]) -> Any:
        """调用服务器端的工具,自动路由到正确的server"""
        server_url = self.tool_server_map.get(tool_name)
        if not server_url:
            raise Exception(f"No server found for tool: {tool_name}")
        try:
            client = Client(server_url)
            async with client:
                result = await client.call_tool(tool_name, arguments)
                return result
        except Exception as e:
            raise Exception(f"Error calling server tool: {e}")

    def answer_with_tools_or_directly(self, user_query: str) -> Dict[str, Any]:
        """
        如果有可用工具,先调用工具,把工具的结果和用户问题一起传给模型,让模型做进一步处理;
        如果没有可用工具,则直接把用户问题传给模型。
        """
        if self.available_tools:
            # 先调用工具
            tool_result = self.process_query(user_query)
            print(tool_result)
            # 构造新的prompt,把工具结果和原始问题一起传给模型
            if tool_result.get("success"):
                result = tool_result.get("result")
                tool_info = None
                if isinstance(result, list) and result:
                    first_item = result[0]
                    if hasattr(first_item, "text"):
                        tool_info = first_item.text
                    elif isinstance(first_item, dict) and "text" in first_item:
                        tool_info = first_item["text"]
                else:
                    tool_info = str(result)
                new_prompt = f"用户问题: {user_query}\n{tool_info}\n请基于工具返回和用户问题,给出最终答案。"
            else:
                # 工具调用失败,直接用原始问题
                new_prompt = f"用户问题: {user_query}\n(工具调用失败: {tool_result.get('error')})\n请直接回答用户问题。"
        else:
            # 没有可用工具,直接用原始问题
            new_prompt = user_query

        # 调用模型生成最终答案
        try:
            print(new_prompt)
            response = self.client.chat.completions.create(
                model="gpt-4o-mini-2024-07-18",
                messages=[{"role": "user", "content": new_prompt}]
            )
            answer = response.choices[0].message.content
            return {"success": True, "answer": answer}
        except Exception as e:
            return {"success": False, "error": str(e)}




# Usage example
if __name__ == "__main__":
    # Configuration
    OPENAI_API_KEY = ""
    SERVER_URLS = [
        "http://localhost:8000/sse",
        "http://localhost:8001/sse",
        # more server URLs can be added here
    ]
    OPENAI_API_BASE = ""  # custom OpenAI-compatible API base URL

    # Create the function-caller instance
    function_caller = OpenAIFunctionCaller(
        api_key=OPENAI_API_KEY,
        server_urls=SERVER_URLS,
        openai_api_base=OPENAI_API_BASE
    )

    # Test queries
    user_query = "What's the weather in Beijing on May 7, 2025?"
    result = function_caller.answer_with_tools_or_directly(user_query)
    print(result)
    user_query_2 = "1+1=?"
    result2 = function_caller.answer_with_tools_or_directly(user_query_2)
    print(result2)

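With both servers running (python server1.py on port 8000 and python server2.py on port 8001), the first query should be routed to search_weather and the second to add; answer_with_tools_or_directly then feeds the tool output back to the model to compose the final answer. Note that process_query issues at most one function call per query, so multi-step tool use would need a loop around it.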

5. Summary and Outlook

Starting from the punkpeye/awesome-mcp-servers repository and an overview of the FastMCP client, this article has laid out a complete path from understanding the protocol to getting hands-on quickly. From here, you can extend the set of tools, integrate more data sources, and combine function calling or your own agents to build smarter business pipelines.
