Issue in Julius (will keep updating)

Hello Team,
Julius keeps stopping here, and I've already spent 10 prompts without any result. Any explanation, please?

Just got this after waiting 40 minutes.

Some background:
The dataset was an NLP task, and the model's accuracy disappointed me, so I moved to BERT.

For the last 20 minutes it has been running with no response.

```python
import torch
from transformers import BertForSequenceClassification, AdamW
from tqdm.notebook import tqdm

# Initialize the BERT model for sequence classification
model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=1)

# Set the device to GPU if available
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)

# Initialize the optimizer
optimizer = AdamW(model.parameters(), lr=2e-5)

# Set the number of epochs
epochs = 4

# Training loop
for epoch in range(epochs):
    print(f'Epoch {epoch + 1}/{epochs}')
    total_loss = 0
    model.train()
    for step, batch in enumerate(tqdm(data_loader, desc='Iteration')):
        batch = tuple(t.to(device) for t in batch)
        b_input_ids, b_input_mask, b_labels = batch
        model.zero_grad()
        outputs = model(b_input_ids, token_type_ids=None,
                        attention_mask=b_input_mask, labels=b_labels)
        loss = outputs.loss
        total_loss += loss.item()
        loss.backward()
        optimizer.step()
    avg_train_loss = total_loss / len(data_loader)
    print(f'Average Training Loss: {avg_train_loss:.2f}')

print('Training complete.')
```


This is the second issue I've found while working in julius.ai.

I think Julius is unable to handle large datasets.

My first post wasn't answered by the team.

Hi Mahmed,

Missed this before. Training a transformer can take a while on CPU. We’re working on figuring out how to provide GPU instances for that sort of thing.
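Before committing to a long CPU run, one way to gauge feasibility is to time a few iterations and extrapolate. This is only a rough sketch; `estimate_total_minutes` is a hypothetical helper, and the step function stands in for one forward/backward pass of the training loop above:

```python
import time

def estimate_total_minutes(step_fn, n_steps: int, warmup: int = 1, timed: int = 3) -> float:
    """Time a few calls of step_fn and extrapolate to n_steps, in minutes."""
    for _ in range(warmup):              # warm-up runs (caches, lazy init)
        step_fn()
    start = time.time()
    for _ in range(timed):
        step_fn()
    per_step = (time.time() - start) / timed
    return per_step * n_steps / 60
```

Passing a closure that runs one training batch, together with `len(data_loader) * epochs`, gives a quick sense of whether a CPU run will finish in minutes or hours.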



My dataset:

Any solution, please? @Matt

Julius recognizes that it is an unusual value, so something must be throwing it off. I would try clearing your cache and running it again, or checking the code to see what might be causing it.

Hi Mahmed!

I was able to run your dataset on SPSS like you, as well as Julius. I got the following:

Julius gave me the same alpha value that SPSS also had. For context, I ran this on R with the psych package. Julius seemed to have also realized it was a weird value which is good.
Like Chris mentioned, did you happen to try clearing the cache and restarting the session? Or checking the code to see if anything unusual happened there? I’m not entirely sure what happened for it to give you such a funky value.

Hope this helps :slight_smile:

@Alysha
Yes, it is working now.
The Cronbach’s alpha for the dataset is approximately 0.923, with a 95% confidence interval ranging from 0.900 to 0.943.
This high value of Cronbach’s alpha suggests that the items (questions) in the dataset are highly reliable and consistently measure the same underlying construct. This indicates good internal consistency among the survey questions.
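For cross-checking a result like this outside SPSS or R, Cronbach's alpha is simple enough to compute directly from its definition. A minimal sketch; the score matrix below is invented for illustration, not taken from the actual survey:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Toy data: two perfectly consistent items give alpha = 1.0
scores = np.array([[1, 1], [2, 2], [3, 3]])
print(round(cronbach_alpha(scores), 3))  # → 1.0
```

Running this on the raw item columns of the dataset should land close to the 0.923 that Julius and SPSS both reported.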

I have one quick question: how can one tell when Julius is not giving an accurate answer? I spent almost a day trying to find the issue in my dataset, only to discover in the end that it wasn't my mistake. I was about to collect the responses all over again, which would have been a big problem for me. Any suggestions for this kind of issue?

Hi Mahmed!

I usually know based off of how Julius responds. For example, Julius mentioned that something was unusual with the analysis itself, so that is something that makes me stop and relook at the dataset. When that happens, I’ll usually open a new chat, clear the cache, and then rerun the test and see what the output is.
Starting from scratch helps me see how the process runs within the chat as well. I would also suggest breaking the analysis into smaller steps so that you can really see how the data is being analyzed. This will help you and Julius keep track of each step as you go, and if something arises, you can fix it.


Here is another issue I faced:
I made this dataset (Chi_square_test - Google Sheets)
and asked Julius which statistical test is more appropriate here.

Julius did these two things:

First reply:


Then I asked Julius to perform a goodness-of-fit test, and it performed it well:

Now, when I again told Julius to perform the Chi-Square Test of Independence that it had suggested at first, Julius gave this reply, which is the opposite of its first statement:

So Julius first suggested that the Chi-Square Test of Independence was appropriate, then later changed its statement. Any solution to this? It is confusing enough to throw anyone off! Details of the conversation: :slight_smile: Julius AI | Your AI Data Analyst
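For what it's worth, the two tests answer different questions: goodness of fit compares one categorical variable against expected frequencies, while the test of independence checks whether two categorical variables are related. Both are one-liners in SciPy, so they are easy to sanity-check outside Julius. A sketch with made-up counts, not the actual dataset:

```python
import numpy as np
from scipy import stats

# Goodness of fit: one categorical variable against expected frequencies
# (expected defaults to a uniform distribution over the categories)
observed = np.array([18, 22, 20, 20])
chi2, p = stats.chisquare(observed)
print(f"goodness of fit: chi2={chi2:.2f}, p={p:.3f}")

# Test of independence: two categorical variables in a contingency table
table = np.array([[10, 20],
                  [30, 40]])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"independence: chi2={chi2:.2f}, p={p:.3f}, dof={dof}")
```

If the sheet really has only one categorical column of counts, goodness of fit is the natural choice; independence needs a genuine two-way table.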


Yeah, that is weird. I decided to try this out myself with your dataset: I prompted Julius to examine it and recommend the correct test. I found that 5 out of 6 times it gave me the correct analysis, with the one exception recommending the Chi-Square Test of Independence. However, I noticed that the only time it gave me the incorrect answer was when I got an error while importing my dataset… so I'm wondering if that may have tripped it up?

I see that it had an issue with your unnamed columns at one point, so I'm wondering if that inadvertently caused some issue with the recommendation? For example, it may have treated an unnamed column as another categorical variable, and that is why it recommended the Chi-Square Test of Independence. I'm not entirely sure, though…
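For reference, those "Unnamed" headers are a common artifact of Google Sheets/CSV exports, and stripping them before analysis removes the phantom column. A minimal sketch; the DataFrame here is invented for illustration:

```python
import pandas as pd

# Hypothetical example: a Sheets/CSV export often carries an empty
# "Unnamed: 0" index column that pandas picks up as real data.
df = pd.DataFrame({"Unnamed: 0": [0, 1], "group": ["A", "B"], "count": [10, 20]})

# Drop any column whose header starts with "Unnamed"
df = df.loc[:, ~df.columns.str.startswith("Unnamed")]
print(list(df.columns))  # → ['group', 'count']
```

Doing this (or asking Julius to do it) right after import should keep a stray blank column from being counted as an extra categorical variable.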

The good thing is that it was able to recognize that it messed up and explain why, so that's a good sign.

I was thinking the same, but if that is true, why did it proceed with the goodness-of-fit test? Anyway, I have selected 5 statistical tests that are most used in our professional field, and I will run all of those tests in Julius with my dataset. I will also write about them. I already wrote about two of them, but in Bengali; I will translate them soon and post them here.


It was probably just following the recommendation you gave it to run the Chi-Square goodness of fit, so it proceeded with that command instead. That would be my best guess.
Awesome, look forward to seeing your posts!

Hi @Matt, here is another issue I found today while working in Julius:


My entire conversation: https://julius.ai/s/271d3b91-74d8-4977-8ffd-bdcfe7dc429d
Is Julius really not capable of building this model?

Today's issue: 21 May 2024

NumPy versions can be a bit brittle; can you expand the error it ran into? (Inside the "show code" view there should be a results section that shows the actual error.)

Here are the details; can you please check:

I tried several times, but it failed; you can check my previous link.

Issue reported: 22 May 2024


I think Julius has some serious technical issues working with big data; I have faced this several times when building a model. Can you please look into it? Suggesting splitting the data is not a solution, because we mostly handle big data for better analysis.
