Inconsistent Chi-Square Test Results Using Julius AI

Hello,
I have encountered an issue with Julius's calculations when performing a chi-square test on a dataset of mine. I ran the same dataset and the same prompt through multiple tools, but I am getting conflicting results.

Here’s the breakdown:

SPSS and DeepSeek provided consistent results: χ² = 0.388, p = 0.533.
On Julius AI, using both the ChatGPT and Claude engines, I got: χ² = 0.097, p = 0.755.

I have carefully ensured that the dataset and methodology remain exactly the same, and I even ran the test multiple times to eliminate any human error.

Could anyone please clarify why this discrepancy is occurring? Is it related to the underlying statistical engine or any specific implementation details in Julius AI?
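For what it's worth, one implementation detail that commonly produces exactly this pattern (a smaller χ² and larger p from one tool) is the Yates continuity correction, which some libraries apply by default on 2×2 tables while SPSS reports the uncorrected Pearson chi-square as its headline statistic. I don't know what Julius runs under the hood, but here is a sketch of how the two choices diverge in scipy, using made-up counts (not my data):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 contingency table, for illustration only
table = np.array([[20, 30],
                  [25, 25]])

# Pearson chi-square without continuity correction (what SPSS reports
# as its main "Pearson Chi-Square" statistic)
chi2_raw, p_raw, dof, _ = chi2_contingency(table, correction=False)

# With the Yates continuity correction (scipy's DEFAULT for 2x2 tables):
# the statistic shrinks and the p-value grows
chi2_corr, p_corr, _, _ = chi2_contingency(table, correction=True)

print(f"Pearson (no correction): chi2={chi2_raw:.3f}, p={p_raw:.3f}")
print(f"Yates-corrected:         chi2={chi2_corr:.3f}, p={p_corr:.3f}")
```

On any 2×2 table the corrected statistic is smaller than the raw Pearson one, so if Julius (or the code it generates) defaults to the correction, that alone could explain the gap. It might be worth asking Julius to show the generated code, or explicitly prompting it to compute the uncorrected Pearson chi-square.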

I look forward to your guidance on resolving this.

Thank you in advance.

Very interesting, ali.poursanati. I'd love to hear what the answer is. This is such a basic statistic, and the difference is profound.