My prior post suggested that librarians are the logical candidates for the emerging role of “Prompt Engineer.” Anthropic, which recently released an AI-enabled tool called Claude, agrees with me: the company is searching for a Prompt Engineer/Librarian.

In the job posting, Anthropic describes itself as an “AI safety and research company that’s working to build reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our customers and for society as a whole.” I signed up to test Claude, which can answer questions, summarize ingested documents, and produce formatted outputs.

They openly admit the challenges: “Given that the field of prompt-engineering is arguably less than 2 years old, this position is a bit hard to hire for! As a result, we ask that you share with us a specific prompt engineering project on LLMs that you’re proud of in your application! Ideally this project should show off a complex and clever prompting architecture or a systematic evaluation of an LLM’s behavior.”

The responsibilities include:

  • Discover, test, and document best practices for a wide range of tasks relevant to our customers.
  • Build up a library of high-quality prompts or prompt chains to accomplish a variety of tasks, with an easy guide to help users search for the one that meets their needs.
  • Build a set of tutorials and interactive tools that teach the art of prompt engineering to our customers.
  • Work with large enterprise customers on their prompting strategies.
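
The “prompt chains” mentioned above can be sketched in a few lines of Python. Everything here is illustrative: `run_model` is a hypothetical stand-in for a real LLM API call, not Claude’s actual interface, and the two-step summarize-then-reformat chain is just one example of the pattern.

```python
def run_model(prompt: str) -> str:
    """Placeholder for an actual LLM call (hypothetical stub)."""
    # For illustration, echo a truncated version of the prompt instead
    # of calling a real model.
    return f"[model response to: {prompt[:40]}...]"


def chain(document: str) -> str:
    """A two-step prompt chain: each step's output feeds the next prompt."""
    # Step 1: ask the model to summarize the document.
    summary = run_model(f"Summarize the following document:\n\n{document}")
    # Step 2: feed that summary into a second prompt that reformats it.
    bullets = run_model(f"Rewrite this summary as bullet points:\n\n{summary}")
    return bullets


print(chain("Anthropic is an AI safety and research company..."))
```

The point of a library of such chains is that the templates and their ordering, not the code, carry the expertise, which is exactly the kind of curation work librarians do.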

You may be a good fit if you:

  • Have 3-5 years of relevant or transferable experience.
  • Have at least a high-level familiarity with the architecture and operation of large language models.
  • Are an excellent communicator, and love teaching technical concepts and creating high-quality documentation that helps others.
  • Are excited to talk to motivated customers and help solve their problems. 
  • Have a creative hacker spirit and love solving puzzles.
  • Have at least basic programming skills and would be comfortable writing small Python programs.
  • Have an organizational mindset and enjoy building teams from the ground up.
  • Think holistically and can proactively identify the needs of an organization.
  • Make ambiguous problems clear and identify core principles that can translate across scenarios.
  • Have a passion for making powerful technology safe and societally beneficial.
  • Anticipate unforeseen risks, model out scenarios, and provide actionable guidance to internal stakeholders.
  • Think creatively about the risks and benefits of new technologies, and think beyond past checklists and playbooks.
  • Stay up-to-date and informed by taking an active interest in emerging research and industry trends.

I especially love this part: you may have imposter syndrome. Apply anyway!

We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed.  Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you’re interested in this work. We think AI systems like the ones we’re building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.

And the Salary Ain’t Bad Either

The expected salary range for this position is $250k – $375k USD.

See the job listing and apply at this link.