
Lorenzejay/byoa #776

Merged: 46 commits merged into main from lorenzejay/byoa on Jun 27, 2024

Conversation

@lorenzejay lorenzejay commented Jun 16, 2024

This ticket aims to let you bring your own AI agent, whether it's a custom LangChain agent, a LlamaIndex one, or any other custom agent, and still have it work with a crew.

  • BaseAgent for defining agents that work together while adhering to the crew flow
  • a third-party-agent ecosystem for bringing in prebuilt agents from other frameworks, leveraging BaseAgent as an abstraction to adhere to the crew flow (a rough sketch of the idea follows below)
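A minimal illustrative sketch of the abstraction (class and method names here are assumptions for illustration, not the PR's actual code):

```python
from abc import ABC, abstractmethod
from typing import Any, List, Optional


class BaseAgent(ABC):
    """Interface every agent must satisfy to participate in the crew flow."""

    def __init__(self, role: str, goal: str, tools: Optional[List[Any]] = None):
        self.role = role
        self.goal = goal
        self.tools = tools or []

    @abstractmethod
    def execute_task(self, task: Any, context: Optional[str] = None) -> str:
        """Run a single task and return its raw text output."""


class LlamaIndexAgentWrapper(BaseAgent):
    """Wraps a prebuilt LlamaIndex agent so a crew can drive it."""

    def __init__(self, llama_agent: Any, role: str, goal: str):
        super().__init__(role=role, goal=goal)
        self._agent = llama_agent

    def execute_task(self, task: Any, context: Optional[str] = None) -> str:
        prompt = f"{task}\n\n{context}" if context else str(task)
        # `chat` is the usual LlamaIndex agent entry point; adjust if your agent differs.
        return str(self._agent.chat(prompt))
```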

@lorenzejay lorenzejay requested a review from gvieira June 17, 2024 18:45
@gvieira gvieira left a comment

Happy to go through the review, but I think we can connect to discuss the idea of a single Agent interface that would prevent us from writing `if agent is this` / `if agent is that` checks.

src/crewai/agent.py (outdated, resolved)
src/crewai/crew.py (outdated, resolved)
src/crewai/crew.py (outdated, resolved)
src/crewai/agents/Third_Party_Agents/llama_index/agent.py (outdated, resolved)
src/crewai/agents/new_wrapper_agent.py (outdated, resolved)
src/crewai/crew.py (outdated, resolved)
src/crewai/crew.py (outdated, resolved)
@lorenzejay lorenzejay requested a review from gvieira June 21, 2024 18:37
src/crewai/agent.py (resolved)
src/crewai/agent.py (resolved)
src/crewai/agent.py (resolved)
def __init__(__pydantic_self__, **data):
    # Merge any nested "config" dict into the regular field data before init.
    config = data.pop("config", {})
    super().__init__(**config, **data)

Collaborator:

If we aren't going to make gpt-4o the default LLM for everything, we need to add a model_validator that checks that the agent has an LLM and that it isn't None.

Collaborator (Author):

I agree. Other agents may not support 4o yet, but I do think a validator checking that the LLM isn't None is best.
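A minimal sketch of such a validator, assuming a Pydantic v2 BaseAgent (field names are illustrative, not the merged code):

```python
from typing import Any, Optional

from pydantic import BaseModel, model_validator


class BaseAgent(BaseModel):
    role: str
    goal: str
    llm: Optional[Any] = None

    @model_validator(mode="after")
    def _ensure_llm_is_set(self) -> "BaseAgent":
        # Fail fast if an agent is constructed without any LLM configured.
        if self.llm is None:
            raise ValueError("Agent requires an `llm`; got None.")
        return self
```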

tools_list = []
try:
    # tentatively try to import from crewai_tools import BaseTool as CrewAITool
Collaborator:

@lorenzejay , drop this comment.

Collaborator (Author):

note to me: from crewai_tools import BaseTool as CrewAITool
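For context, a hedged sketch of what that optional import and conversion could look like (`agent_tools` and `to_langchain()` are assumptions for illustration, not confirmed from this diff):

```python
tools_list = []
try:
    # crewai_tools is an optional dependency, so import it lazily.
    from crewai_tools import BaseTool as CrewAITool

    for tool in agent_tools:
        if isinstance(tool, CrewAITool):
            # Assumed conversion hook to a LangChain-compatible tool.
            tools_list.append(tool.to_langchain())
        else:
            tools_list.append(tool)
except ModuleNotFoundError:
    # Without crewai_tools installed, pass the tools through unchanged.
    tools_list = list(agent_tools)
```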

src/crewai/agent.py (resolved)
        if not self._rpm_controller:
            self._rpm_controller = rpm_controller
            self.create_agent_executor()

    def format_log_to_str(
Collaborator:

@lorenzejay should this be an ABC in BaseAgent?

Collaborator (Author):

Done, abstractmethod set.


        return copied_agent

    def get_output_converter(self, llm, text, model, instructions):
Collaborator:

Should this be an ABC in BaseAgent as well?
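For illustration, declaring both as abstract methods on BaseAgent could look roughly like this (signatures are guesses based on the snippets above, not the merged code):

```python
from abc import ABC, abstractmethod
from typing import Any, List, Tuple


class BaseAgent(ABC):
    @abstractmethod
    def format_log_to_str(
        self,
        intermediate_steps: List[Tuple[Any, str]],
        observation_prefix: str = "Observation: ",
        llm_prefix: str = "",
    ) -> str:
        """Render intermediate (action, observation) steps as one prompt string."""

    @abstractmethod
    def get_output_converter(self, llm: Any, text: str, model: Any, instructions: str) -> Any:
        """Return a converter that coerces `text` into the requested output model."""
```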

src/crewai/agents/executor.py (resolved)
class CrewAgentExecutorMixin:
    def _should_force_answer(self) -> bool:
        return (
            self.iterations == self.force_answer_max_iterations
Collaborator:

None of these attributes exist on this class. You need to move these attributes over from executor.py and delete them over there.


    def _create_short_term_memory(self, output) -> None:
        if (
            self.crew
Collaborator:

Need to move crew over to this class instead of executor.py.


    def _create_long_term_memory(self, output) -> None:
        if self.crew and self.crew.memory:
            ltm_agent = TaskEvaluator(self.crew_agent)
Collaborator:

Need to move over crew_agent from executor.py.
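Put together, the mixin needs to own (or declare) every attribute its helpers read; a rough sketch of the intent, using only names that appear in the snippets above:

```python
from typing import Any, Optional


class CrewAgentExecutorMixin:
    # Attributes previously living in executor.py, declared here so the
    # mixin's methods have everything they reference.
    crew: Optional[Any] = None
    crew_agent: Optional[Any] = None
    iterations: int = 0
    force_answer_max_iterations: int = 0

    def _should_force_answer(self) -> bool:
        return self.iterations == self.force_answer_max_iterations
```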

        coworker = self._get_coworker(coworker, **kwargs)
        return self._execute(coworker, question, context)

    def _execute(self, agent_role: Optional[str], task: str, context: str):
Collaborator:

(screenshot of the clean-up code omitted)

We need to add back the clean-up shown in that screenshot to support weaker models.

Collaborator (Author):

main does not have these changes. Think we can do this later?

src/crewai/crew.py (resolved)
src/crewai/crew.py (outdated, resolved)
@@ -435,17 +446,23 @@ def _run_hierarchical_process(self) -> Union[str, Dict[str, Any]]:
        )

        self._logger.log("debug", f"[{manager.role}] Task output: {task_output}")

        if hasattr(task.agent, "_token_process"):
Collaborator:

Why do we have to do this check in hierarchical but not sequential?

Collaborator (Author):

Attaching an agent to a task isn't required there, so task.agent can be None and gives NoneType errors.

Collaborator (Author):

Hierarchical does not need an agent on the task.
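A small illustration of why the guard matters (the variable name `token_usage` is hypothetical):

```python
token_usage = None
# Hierarchical crews may leave task.agent unset (the manager decides who
# works), so guard before reading agent internals; hasattr(None, ...) is
# simply False, which also covers the agent-less case.
if hasattr(task.agent, "_token_process"):
    token_usage = task.agent._token_process
```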

src/crewai/crew.py (outdated, resolved)
src/crewai/task.py (outdated, resolved)
src/crewai/task.py (outdated, resolved)
@@ -302,14 +302,13 @@ def _export_output(self, result: str) -> Any:
            pass

        # type: ignore # Item "None" of "Agent | None" has no attribute "function_calling_llm"
-       llm = self.agent.function_calling_llm or self.agent.llm
+       llm = getattr(self.agent, "function_calling_llm", None) or self.agent.llm
Collaborator:

To prevent us from making these checks, maybe we could update the base_agent to have a function_calling_llm attribute that defaults to None.
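A sketch of that suggestion (assuming a Pydantic BaseAgent; field names taken from the snippet above, not the merged code):

```python
from typing import Any, Optional

from pydantic import BaseModel


class BaseAgent(BaseModel):
    llm: Optional[Any] = None
    # Default to None so callers can simply write
    # `self.agent.function_calling_llm or self.agent.llm` without getattr checks.
    function_calling_llm: Optional[Any] = None
```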

src/crewai/agents/agent_builder/base_agent.py (resolved)
src/crewai/tools/agent_tools.py (resolved)
@joaomdmoura joaomdmoura merged commit 10997dd into main Jun 27, 2024
1 of 3 checks passed
@joaomdmoura joaomdmoura deleted the lorenzejay/byoa branch June 27, 2024 17:56