I experienced the Claude API latency issue too, and the better move there is to switch to Google Gemini Flash. It's significantly faster, and the quality of the responses is still very good. Here's some sample code:
from google import genai
from google.genai import types

# Create the client (settings.GEMINI_API_KEY comes from your own config)
client = genai.Client(
    api_key=settings.GEMINI_API_KEY,
    http_options={"api_version": "v1alpha"},
)
model = "gemini-2.0-flash-exp"

# Construct the prompt
prompt_text = " "
gemini_prompt = {
    "parts": [{"text": prompt_text}]
}

# Set up generation config to request both text and image output
generation_config = types.GenerateContentConfig(
    response_modalities=["Text", "Image"],
    temperature=0.7,
    max_output_tokens=2048,
)

# Use the async version of generate_content
response = await client.aio.models.generate_content(
    model=model,
    contents=gemini_prompt,
    config=generation_config,
)
"Professor" used to be a high prestige profession because they were experts in their field and a fount of knowledge. Today they're losing prestige because AI is more expert than they are.
But a major reason "education" works is because it transfers prestige from professor to student. The more prestigious the professor, the more the student gets out of the relationship. While AI can convey the knowledge, it can't impart prestige. Thus students will pay much less attention to AI than to a prestigious professor.
So "professor" is dying out as a profession because there no longer much prestige attached to the job. But if education means anything, it means a prestige transfer, which has to come from a human. I don't know what job description those humans will have, but they will be key to education.
AI can steal prestige from professors, but it can't award prestige to students. So I think student interest in AI seminars will not be very high.
With Claude Code, did you initialize a Claude.md file in the project directory? This is a text file that Claude writes to describe the project. Think of it as an onboarding document for a new engineer on the project, which is what Claude is at the beginning of every new session. I have Claude update the Claude.md file at the end of every session. It can grow into an extensive document, but a very useful one. It also gives you a window into Claude's understanding of the project.
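For anyone curious, here's a rough sketch of the kind of structure mine tends to converge on — the section names are just my own layout, nothing the tool mandates:

```markdown
# Project Overview
One-paragraph summary of what the project does and who it's for.

## Architecture
- Key modules and how they fit together
- External services and APIs used

## Conventions
- Code style, naming, test layout

## Current State / Open Tasks
- What was finished last session
- Known issues and next steps
```

Having Claude update the "Current State" section at the end of each session is what makes the next session's cold start cheap.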
I would trust your instincts on this and explore as far as you can take it. You can fine-tune your objectives as you discover the capabilities and shortcomings of what you can do with LLMs.
Sounds like a great start! But are you really expecting 1000 days before getting to v1.0?
.01 or even .1 is sufficiently humble.
The banter file might be interesting, but the whole alt-AI student peers idea leaves me … uncertain, though suspecting it can be improved. For instance, maybe TAs rather than students, who comment on the key point of the professor's answer from another POV; that might work better for some students, personality-wise and/or IQ/SAT-wise.
I would be interested in trying it out.
Is there a link to this v0.001 deployment for us to checkout?
I'd be interested in being a beta participant. Working on a tangential project and would love to observe the process. Happy to give feedback. Also might already have an idea for you.
Link to your GitHub repo?