# What's New in Vector Vein #8
Hello friends! Half a year has passed since the last update report. This time, we've accumulated a wealth of updates to share with everyone. Besides the regular node and model updates, there's also a series of important feature improvements and additions! Let's dive into the details!
# 🤯 Workflow Design Agent
Give a man a fish and you feed him for a day; teach a man to fish and you feed him for a lifetime. Now, AI can help you design workflows!
Still struggling with how to combine nodes and implement complex processes? Now, you can directly describe your requirements to the AI and let it automatically design the workflow for you!
Click on [Workflow Design] in the left menu on the workflow homepage to get started.
Just input your goal, choose a large language model you trust, and the AI Agent will start multiple rounds of thinking (Cycles), progressively calling tools (Tool Calls) to analyze requirements, find available nodes, and even consult you (Ask User) to confirm details, ultimately generating a complete and usable workflow.
During the design process, you can view the AI's reasoning process, the details of the tools called, and their corresponding responses at any time. You can even provide guidance if the AI encounters difficulties.
Currently, this feature is still in the early testing phase. We recommend using more capable models like gemini-2.5-pro or claude-3.7-sonnet for better design results.
👉 Try the Workflow Design Agent here
# 🤝 Integrate Any Workflow as Your Own MCP Server
Break Down Application Barriers and Let Your Workflows Shine Everywhere!
We now officially support the Model Context Protocol (MCP)! You can publish any VectorVein workflow you create as an MCP server with just one click.
What does this mean?
It means you can seamlessly integrate the powerful workflow capabilities of VectorVein into various applications that support the MCP protocol, such as Claude Desktop, VS Code, Cursor, and even other AI Agent platforms!
Imagine directly calling your meticulously designed code analysis and generation workflows within your IDE, or using your customized information retrieval and report generation workflows directly in Claude Desktop. MCP makes it all possible!
To connect, simply add the JSON configuration shown in the image above to the settings page of your client (e.g., Claude Desktop, VS Code, or Cursor).
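If you prefer to connect from your own code rather than a desktop client, the official MCP Python SDK (the `mcp` package) can do the same job. Below is a minimal sketch that assumes your VectorVein MCP server is exposed over SSE; the URL and tool name are placeholders to replace with the values from your own server page.

```python
# Minimal MCP client sketch (pip install mcp). The URL and tool name below
# are placeholders -- copy the real values from your VectorVein MCP server page.
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

SERVER_URL = "https://example.com/your-vectorvein-mcp/sse"  # hypothetical URL


async def main() -> None:
    # Open an SSE connection to the MCP server and start a client session.
    async with sse_client(SERVER_URL) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()

            # Each published workflow shows up as a callable tool.
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

            # Invoke one of the workflows by its tool name (placeholder here).
            result = await session.call_tool("my_workflow", arguments={"query": "hello"})
            print(result)


if __name__ == "__main__":
    asyncio.run(main())
```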
👉 Click here to create your workflow MCP server
# 🔑 Custom API Keys: Use Large Language Models at a Lower Cost
Say Goodbye to Platform Credit Limits, Use Your Own Keys, Run Your Own Models!
Now, you can configure your own third-party large language model API keys under "My Account" - "Developer API Keys"!
A wide range of API endpoints is currently supported, including OpenAI, Anthropic (including Vertex AI and Bedrock), and major Chinese providers (Alibaba Qwen, Zhipu AI, Moonshot AI, 01.AI, Baichuan AI, MiniMax, StepFun).
Once configured, you can choose to prioritize using your own key in the corresponding model nodes or Agent chats. This allows you to:
- Zero Credit Consumption: Model calls using your custom key will no longer consume VectorVein platform credits.
- Bypass Limits: Directly use the quotas or concurrency limits you purchased on third-party platforms.
- Greater Flexibility: Through custom configurations, you can also integrate and use models not yet natively supported by the platform.
👉 Click here to view the Custom API Key help documentation
# 🚀 New Text-to-Image AI Model
- 【GPT Image】: OpenAI's latest image generation model; it also supports image editing and delivers very high output quality.
gpt-image-1 demo
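If you'd like to try the model directly outside of VectorVein, here is a minimal sketch using the official OpenAI Python SDK (it assumes an OPENAI_API_KEY environment variable; the prompt and output path are just examples):

```python
# Minimal sketch of calling gpt-image-1 directly via the OpenAI Python SDK
# (pip install openai); this is separate from the VectorVein node itself.
import base64

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.generate(
    model="gpt-image-1",
    prompt="A watercolor painting of a lighthouse at dawn",  # example prompt
    size="1024x1024",
)

# gpt-image-1 returns base64-encoded image data.
with open("lighthouse.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))
```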
# 🎬 Audio Processing Now Live!
# Audio Processing Suite
- 【MiniMax Music Generation】: Generate new music based on reference audio and lyrics.
- 【Voice Cloning】 & 【Sound Effect Generation】 & 【Audio Editing】: Comprehensive audio creation and editing capabilities.
👉 View Full Audio Generation Documentation
# 📈 Model Updates
# New Models Added
- OpenAI: o1-preview, o3-mini, o3-mini-high, o4-mini, o4-mini-high, gpt-4.1
- Anthropic: claude-3-7-sonnet, claude-3-7-sonnet-thinking (Supports chain-of-thought output)
- Tongyi Qianwen: Qwen3 series
- Zhipu AI: glm-z1 series
- Gemini: gemini-2.5 series
- X.AI: grok-3 series
- New Multimodal Model: Moonshot Vision
# ✨ Other Feature Optimizations
- VApp API: Added VApp access key management API for easier developer integration.
- VApp Dynamic Links: Dynamically generate time-limited VApp access links via API without exposing the Access Key.
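To give a feel for how this might look from the developer side, here is a rough Python sketch using the requests library. The endpoint path, parameter names, and response fields below are hypothetical placeholders for illustration only, not the documented VectorVein API; please refer to the official API docs for the real interface.

```python
# Illustrative sketch only -- the endpoint, parameters, and response fields
# below are hypothetical placeholders, not the documented VectorVein API.
import os

import requests

API_BASE = "https://vectorvein.example/api"           # placeholder base URL
ACCESS_KEY = os.environ["VECTORVEIN_ACCESS_KEY"]       # keep the key server-side


def create_temporary_vapp_link(vapp_id: str, expires_in: int = 3600) -> str:
    """Ask the backend for a time-limited VApp link without exposing the key."""
    resp = requests.post(
        f"{API_BASE}/vapp/dynamic-link",               # hypothetical endpoint
        headers={"Authorization": f"Bearer {ACCESS_KEY}"},
        json={"vapp_id": vapp_id, "expires_in": expires_in},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["url"]                          # hypothetical field name


# The returned URL can be handed to end users and expires automatically,
# so the Access Key itself never leaves your server.
```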
# 💡 By the way, let me introduce my new project: Codexy
Recently, I used AI to refactor OpenAI's Codex CLI (a terminal AI assistant written in TypeScript) into a Python version and named it Codexy.
Codexy Highlights:
- AI Programming Partner in Your Terminal: Chat with it directly in your terminal and let the AI help you understand code, modify files, and even execute commands.
- Native Python, Textual Powered: Built with the powerful Textual library for a beautiful and efficient TUI, offering an interaction experience far superior to a traditional command line.
- Powerful Tool Calling: More than just chat, it can call tools for file reading/writing, code modification (diff/patch), command execution, and more, truly getting work done.
- Controllable Automation: Offers multiple approval modes, allowing you to decide the AI's level of autonomy and balance efficiency against safety.
- Lightweight and Easy to Use: Pure Python implementation, easy to deploy locally or on a server.
The entire refactoring process heavily utilized the capabilities of Gemini 1.5 Pro, allowing me to once again experience the joy of "human-AI pair programming".
Interested friends can check out the detailed introductory article: Codexy: Bring the AI Coding Assistant into Your Python Terminal
Project Address: https://github.com/andersonby/codexy (Stars and contributions are welcome!)
Thank you for your support! Interested friends are welcome to scan my WeChat Work QR code below to join the group for discussion:
That's all for this update, see you next time!