Help Optimizing Julius AI for a Task Using ChatGPT Conversation History

I’m on the Standard plan and looking for help optimizing Julius AI for a specific use case. I’m experimenting with using my ChatGPT conversation history to support different tasks, starting with crafting job application responses. I know I could stitch together multiple tools to build a custom solution, but I’m interested in seeing whether Julius can handle this more easily. I’m not entirely sure this is a use case suited to Julius, but it would still be fun to try.

Currently, I’m using a ChatGPT conversation export (CSV) to help generate 500–1,000-word responses for education-related job applications. Specifically, I need Julius to:

  1. Extract education-related content from my conversation history to help generate responses (a rough filtering sketch follows this list).
  2. Make meaningful connections from relevant conversations that aren’t explicitly about education but might be useful for context (e.g., discussions on technology trends or social impact).
  3. Generate comprehensive responses and help refine them to fit specific word limits.
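For the first step, a simple keyword filter over the CSV may be enough to narrow things down before asking Julius to reason over the results. Here is a rough pandas sketch; the filename, column name, and keyword list are placeholders from my own setup, so adjust them to yours:

```python
import pandas as pd

df = pd.read_csv("conversations_flat.csv")  # hypothetical filename

# Hypothetical keyword list -- tune it to whatever shows up in your history.
keywords = ["education", "teaching", "curriculum", "student", "classroom"]
pattern = "|".join(keywords)

# Case-insensitive substring match on the message text column.
education_df = df[df["text"].str.contains(pattern, case=False, na=False)]
education_df.to_csv("education_subset.csv", index=False)
```

A smaller, pre-filtered file should also make the upload easier for Julius to work with than the full history.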

I’m looking for any advice or references that could help with:

  • Model selection
  • Custom instructions
  • Workflow optimization
  • Relevant configuration or settings changes within Julius for this task

The file I’m working with is a cleaned, structured CSV of my entire ChatGPT conversation history. I used a Python script with pandas to convert the ChatGPT JSON export (‘conversations.json’) into a structured CSV, cleaning and flattening the nested conversation structure while preserving all messages, roles, timestamps, and parent-child relationships.
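In case it helps anyone else, here is a minimal sketch of that flattening step. The field names (‘mapping’, ‘create_time’, ‘parts’, and so on) reflect the export schema at the time I ran it, so newer exports may differ:

```python
import json
import pandas as pd

# Load the ChatGPT export; the top level is a list of conversations.
with open("conversations.json", encoding="utf-8") as f:
    conversations = json.load(f)

rows = []
for conv in conversations:
    title = conv.get("title", "")
    # Each conversation stores its messages as a graph in "mapping":
    # node_id -> {"message": {...}, "parent": ..., "children": [...]}
    for node_id, node in conv.get("mapping", {}).items():
        msg = node.get("message")
        if not msg:
            continue  # skip empty root nodes
        parts = (msg.get("content") or {}).get("parts") or []
        text = " ".join(p for p in parts if isinstance(p, str)).strip()
        if not text:
            continue
        rows.append({
            "conversation": title,
            "node_id": node_id,
            "parent_id": node.get("parent"),
            "role": (msg.get("author") or {}).get("role"),
            "timestamp": msg.get("create_time"),
            "text": text,
        })

pd.DataFrame(rows).to_csv("conversations_flat.csv", index=False)
```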

I have around five specific essay questions I’m trying to answer, and I see the ChatGPT conversation history as a resource for memory retrieval: a way to resurface useful ideas or connections I might have forgotten. I want to make sure I’m approaching the setup effectively without overcomplicating the workflow.
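One idea I’m considering for the “connections that aren’t explicitly about education” part: rank every message by similarity to each essay question and only hand Julius the top matches. Here is a rough TF-IDF sketch (the question text is a placeholder, and TF-IDF only catches wording overlap, so an embedding-based approach would likely surface deeper connections):

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

df = pd.read_csv("conversations_flat.csv")
question = "Describe a time technology changed how you approach education."  # placeholder

# Vectorize the question alongside every message, then rank messages
# by cosine similarity to the question.
texts = df["text"].fillna("").tolist()
matrix = TfidfVectorizer(stop_words="english").fit_transform([question] + texts)
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

top_matches = df.assign(score=scores).nlargest(20, "score")
print(top_matches[["conversation", "role", "score", "text"]])
```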

I realize I’m trying to optimize Julius for a fairly specific practical use case, and I understand it may require some experimentation. Any guidance on setup, configuration, or best practices to make this workflow as effective as possible would be much appreciated.

Sorry, this is a couple of days late, but how is the process going for you? I’m equally curious to see whether this works.
Have you tried implementing a workflow to streamline the process yet? It sounds like you have a solid plan for approaching this task. Which model are you mainly using to analyze these .csv files?