
'KMeansModel' object has no attribute 'computeCost' in apache pyspark

I'm experimenting with a clustering model in PySpark. I'm trying to get the mean squared cost of the cluster fit for different values of K:

def meanScore(k, df):
    inputCols = df.columns[:38]
    assembler = VectorAssembler(inputCols=inputCols, outputCol="features")
    kmeans = KMeans().setK(k)
    pipeModel2 = Pipeline(stages=[assembler, kmeans])
    kmeansModel = pipeModel2.fit(df).stages[-1]
    return kmeansModel.computeCost(assembler.transform(df)) / df.count()

When I call this function to compute the cost for different values of K on the dataframe:

for k in range(20, 100, 20):
    sc = meanScore(k, numericOnly)
    print((k, sc))

I receive an attribute error: AttributeError: 'KMeansModel' object has no attribute 'computeCost'

I'm fairly new to PySpark and am just learning; I sincerely appreciate any help with this. Thanks!

Asked Oct 17 '25 by kausik sivakumar

1 Answer

As Erkan Sirin mentioned, computeCost is deprecated in recent versions of Spark (deprecated in 2.4 and removed in 3.0). Using ClusteringEvaluator instead may help you solve your problem:

from pyspark.ml.evaluation import ClusteringEvaluator

# Make predictions with the fitted model
predictions = model.transform(dataset)

# Evaluate clustering by computing the Silhouette score
evaluator = ClusteringEvaluator()
silhouette = evaluator.evaluate(predictions)
print("Silhouette with squared euclidean distance = " + str(silhouette))

I hope this helps; you can check the official docs for more information.

Answered Oct 18 '25 by Dhouibi iheb
