snowflake.snowpark.DataFrameReader.table

DataFrameReader.table(name: Union[str, Iterable[str]], *, time_travel_mode: Optional[Literal['at', 'before']] = None, statement: Optional[str] = None, offset: Optional[int] = None, timestamp: Optional[Union[str, datetime]] = None, timestamp_type: Optional[Union[str, TimestampTimeZone]] = None, stream: Optional[str] = None) → Table

Returns a Table that points to the specified table.

This method is an alias of Session.table() with additional support for setting time travel options via the option() method.

Parameters:
  • name – Name of the table to use.

  • time_travel_mode – Time travel mode, either ‘at’ or ‘before’. Can also be set via option("time_travel_mode", "at").

  • statement – Query ID for time travel. Can also be set via option("statement", "query_id").

  • offset – Negative integer representing seconds in the past for time travel. Can also be set via option("offset", -60).

  • timestamp – Timestamp string or datetime object for time travel. Can also be set via option("timestamp", "2023-01-01 12:00:00") or option("as-of-timestamp", "2023-01-01 12:00:00").

  • timestamp_type – Type of timestamp interpretation (‘NTZ’, ‘LTZ’, or ‘TZ’). Can also be set via option("timestamp_type", "LTZ").

  • stream – Stream name for time travel. Can also be set via option("stream", "stream_name").

Note

Time travel options can be set either as direct parameters or via the option() method, but NOT both. If any direct time travel parameter is provided, all time travel options set via option() are ignored to avoid conflicts.

PySpark Compatibility: The as-of-timestamp option automatically sets time_travel_mode="at" and cannot be used with time_travel_mode="before".

Examples:
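
# Basic usage with no time travel options (assumes a table named "my_table" exists)
>>> table = session.read.table("my_table")  # doctest: +SKIP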

# Using direct parameters
>>> table = session.read.table("my_table", time_travel_mode="at", offset=-3600)  # doctest: +SKIP

# Using options (recommended for chaining)
>>> table = (session.read  # doctest: +SKIP
...     .option("time_travel_mode", "at")
...     .option("offset", -3600)
...     .table("my_table"))

# PySpark-style as-of-timestamp (automatically sets mode to "at")
>>> table = session.read.option("as-of-timestamp", "2023-01-01 12:00:00").table("my_table")  # doctest: +SKIP

# timestamp_type automatically set to "TZ" due to timezone info
>>> import datetime, pytz  # doctest: +SKIP
>>> tz_aware = datetime.datetime(2023, 1, 1, 12, 0, 0, tzinfo=pytz.UTC)  # doctest: +SKIP
>>> table1 = session.read.table("my_table", time_travel_mode="at", timestamp=tz_aware)  # doctest: +SKIP

# timestamp_type remains "NTZ" (user's explicit choice respected)
>>> table2 = session.read.table("my_table", time_travel_mode="at", timestamp=tz_aware, timestamp_type="NTZ")  # doctest: +SKIP

# Mixing options and parameters (direct parameters completely override options)
>>> table = (session.read  # doctest: +SKIP
...     .option("time_travel_mode", "before")  # This will be IGNORED
...     .option("offset", -60)                 # This will be IGNORED
...     .table("my_table", time_travel_mode="at", offset=-3600))  # Only this is used