Databricks Community

Code:

Writer.jdbc_writer("Economy",economy,conf=CONF.MSSQL.to_dict(), modified_by=JOB_ID['Economy'])

The problem arises when I try to run the code in the specified Databricks notebook: it raises "ValueError: not enough values to unpack (expected 2, got 1)".

Here's the full error message:

ValueError: not enough values to unpack (expected 2, got 1)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<command-75945229> in <cell line: 1>()
----> 1 Writer.jdbc_writer("Economy",economy,conf=CONF.MSSQL.to_dict(), modified_by=JOB_ID['Economy'])
<command-75945229> in jdbc_writer(table_name, df, conf, debug, modified_by)
     15       conf = conf.to_dict()
---> 17     schema, table = table_name.split('.')
     18     schema = schema[1:-1] if schema[0] == "[" else schema
     19     table = table[1:-1] if table[0] == "[" else table

And when I clicked into the cell, this is the code:

class Writer:
  @staticmethod
  def jdbc_writer(table_name:str, 
                  df:SparkDataFrame, 
                  conf:Union[dict ,SqlConnect ], 
                  debug=False, 
                  modified_by = None,
                 ) -> None:

I have searched for solutions to this particular problem but never managed to find one, and your help would really benefit me.

Hello, thank you for reaching out to us.

This looks like a general error message. Can you please share the Runtime version of the cluster that you are running the notebook on? You can find this detail under the cluster configuration.

Also, have you checked this article?

https://stackoverflow.com/questions/52108914/python-how-to-fix-valueerror-not-enough-values-to-unpac...

@Jillinie Park:

The error message you are seeing ("ValueError: not enough values to unpack (expected 2, got 1)") occurs when an iterable yields fewer values than the number of variables you are unpacking it into. In your case, the error is happening on this line of code:

schema, table = table_name.split('.')

Here, you are trying to unpack the result of the split() method into two variables (schema and table), but split() is returning only one value instead of two. To fix this error, you can check the value of table_name before calling split() to make sure it contains a dot (.) character. If it doesn't, you can handle the error accordingly (e.g. raise an exception or return an error message).
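
For instance, here is a minimal reproduction, assuming the call in your traceback passed the bare name "Economy" with no dot (as it appears to):

table_name = "Economy"
# "Economy".split('.') returns the single-element list ['Economy'],
# so unpacking it into two names fails:
schema, table = table_name.split('.')
# ValueError: not enough values to unpack (expected 2, got 1)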

Here's an example of how you could modify the jdbc_writer() method to handle this error:

from typing import Union

from pyspark.sql import DataFrame as SparkDataFrame

class Writer:
  @staticmethod
  def jdbc_writer(table_name: str,
                  df: SparkDataFrame,
                  conf: Union[dict, SqlConnect],  # SqlConnect is your project's own config class
                  debug=False,
                  modified_by=None) -> None:
    # Validate the table name before trying to unpack it into schema and table.
    if '.' not in table_name:
        raise ValueError(f"Invalid table name '{table_name}'. Table name should be in the format 'schema.table'.")
    if isinstance(conf, SqlConnect):
        conf = conf.to_dict()
    schema, table = table_name.split('.')
    # Strip surrounding brackets, e.g. "[dbo]" -> "dbo".
    schema = schema[1:-1] if schema[0] == "[" else schema
    table = table[1:-1] if table[0] == "[" else table
    # rest of the code goes here

In this modified version of the jdbc_writer() method, we first check if the table_name argument contains a dot (.) character. If it doesn't, we raise a ValueError with an appropriate error message. Otherwise, we proceed with the rest of the method as before.
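
Applied to your original call, that means passing a fully qualified name instead of just "Economy" (assuming the target schema is dbo; substitute your actual schema name):

# Assumes the target schema is dbo; adjust to your actual schema.
Writer.jdbc_writer("[dbo].[Economy]", economy,
                   conf=CONF.MSSQL.to_dict(),
                   modified_by=JOB_ID['Economy'])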
