PPOL 5203 Data Science I: Foundations

Writing/Loading and Previewing Data in Pandas

Tiago Ventura


In this Notebook we cover

Pandas methods for:

  • Loading data
  • Saving data
  • Data Conversion
  • Previewing your Pandas DataFrame

Setup

In this notebook, we will work with the FIFA World Cup dataset hosted on Kaggle.

Download the data from our website or from Kaggle. Then:

  • Save it in a folder you can access from this notebook
  • Or save it in the same folder as this notebook (your working directory)
In [24]:
# import modules
import pandas as pd
import numpy as np

Data in and Data out in Pandas

In our class on file management, we saw how to use connection management tools in Python (open(), close(), with()) to load data stored locally into our Python environment. That process usually involved reading a locally stored file row by row and importing the data into a nested container (a list or dictionary).

Today, we will use high-level functions from pandas that simplify the process of loading data into our Python environment. We will focus on data input and output with pandas, though numerous tools in other libraries can also help with reading and writing data in various formats.
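
To see the contrast, here is a minimal sketch (assuming WorldCupMatches.csv sits in your working directory) of reading the same file with a plain connection manager versus pandas. The manual split on commas is deliberately naive and ignores quoted fields:

import pandas as pd

# low-level approach from our file-management class:
# read the raw file line by line into a list of lists
with open("WorldCupMatches.csv", encoding="utf-8") as f:
    rows = [line.rstrip("\n").split(",") for line in f]  # naive split; ignores quoted commas
header, records = rows[0], rows[1:]

# high-level approach: one pandas call returns a labeled DataFrame
d = pd.read_csv("WorldCupMatches.csv")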

pandas methods

pandas contains a variety of methods for reading in various data types.

| Format type | Data description | Reader | Writer | Note |
|---|---|---|---|---|
| text | CSV | read_csv | to_csv | |
| text | JSON | read_json | to_json | |
| text | HTML | read_html | to_html | |
| text | Local clipboard | read_clipboard | to_clipboard | |
| binary | MS Excel | read_excel | to_excel | needs an Excel engine such as openpyxl |
| binary | HDF5 Format | read_hdf | to_hdf | |
| binary | Feather Format | read_feather | to_feather | |
| binary | Parquet Format | read_parquet | to_parquet | |
| binary | Msgpack | read_msgpack | to_msgpack | removed in pandas >= 1.0 |
| binary | Stata | read_stata | to_stata | |
| binary | SAS | read_sas | (no writer) | |
| binary | Python Pickle Format | read_pickle | to_pickle | |
| SQL | SQL | read_sql | to_sql | |
| SQL | Google BigQuery | read_gbq | to_gbq | |

Read more about all the input/output methods in the pandas IO tools guide: https://pandas.pydata.org/docs/user_guide/io.html.
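
To illustrate the pattern, every pd.read_* reader in the table has a matching DataFrame.to_* writer. A minimal sketch using the match data (the output file names are just examples, and the Excel and Parquet calls assume openpyxl and pyarrow are installed):

import pandas as pd

d = pd.read_csv("WorldCupMatches.csv")          # text: CSV

d.to_json("matches.json")                       # text: JSON
d_json = pd.read_json("matches.json")

d.to_excel("matches.xlsx", index=False)         # binary: MS Excel (needs openpyxl)
d_xlsx = pd.read_excel("matches.xlsx")

d.to_parquet("matches.parquet")                 # binary: Parquet (needs pyarrow or fastparquet)
d_parquet = pd.read_parquet("matches.parquet")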

Data in with pandas

As you can see, the purpose of each function is intuitive. For example:

pandas.read_csv(): to open flat files

In [58]:
# read a csv 
d = pd.read_csv("WorldCupMatches.csv")
In [59]:
d.head()
Out[59]:
Year Datetime Stage Stadium City Home Team Name Home Team Goals Away Team Goals Away Team Name Win conditions Attendance Half-time Home Goals Half-time Away Goals Referee Assistant 1 Assistant 2 RoundID MatchID Home Team Initials Away Team Initials
0 1930 13 Jul 1930 - 15:00 Group 1 Pocitos Montevideo France 4 1 Mexico 4444.0 3 0 LOMBARDI Domingo (URU) CRISTOPHE Henry (BEL) REGO Gilberto (BRA) 201 1096 FRA MEX
1 1930 13 Jul 1930 - 15:00 Group 4 Parque Central Montevideo USA 3 0 Belgium 18346.0 2 0 MACIAS Jose (ARG) MATEUCCI Francisco (URU) WARNKEN Alberto (CHI) 201 1090 USA BEL
2 1930 14 Jul 1930 - 12:45 Group 2 Parque Central Montevideo Yugoslavia 2 1 Brazil 24059.0 2 0 TEJADA Anibal (URU) VALLARINO Ricardo (URU) BALWAY Thomas (FRA) 201 1093 YUG BRA
3 1930 14 Jul 1930 - 14:50 Group 3 Pocitos Montevideo Romania 3 1 Peru 2549.0 1 0 WARNKEN Alberto (CHI) LANGENUS Jean (BEL) MATEUCCI Francisco (URU) 201 1098 ROU PER
4 1930 15 Jul 1930 - 16:00 Group 1 Parque Central Montevideo Argentina 1 0 France 23409.0 0 0 REGO Gilberto (BRA) SAUCEDO Ulises (BOL) RADULESCU Constantin (ROU) 201 1085 ARG FRA

Exploring Arguments

pandas loading functions are highly customizable. For example, check the documentation of pandas.read_csv()

In [30]:
# asking for help
help(pd.read_csv)
Help on function read_csv in module pandas.io.parsers.readers:

read_csv(filepath_or_buffer: 'FilePath | ReadCsvBuffer[bytes] | ReadCsvBuffer[str]', *, sep: 'str | None | lib.NoDefault' = <no_default>, delimiter: 'str | None | lib.NoDefault' = None, header: "int | Sequence[int] | None | Literal['infer']" = 'infer', names: 'Sequence[Hashable] | None | lib.NoDefault' = <no_default>, index_col: 'IndexLabel | Literal[False] | None' = None, usecols=None, squeeze: 'bool | None' = None, prefix: 'str | lib.NoDefault' = <no_default>, mangle_dupe_cols: 'bool' = True, dtype: 'DtypeArg | None' = None, engine: 'CSVEngine | None' = None, converters=None, true_values=None, false_values=None, skipinitialspace: 'bool' = False, skiprows=None, skipfooter: 'int' = 0, nrows: 'int | None' = None, na_values=None, keep_default_na: 'bool' = True, na_filter: 'bool' = True, verbose: 'bool' = False, skip_blank_lines: 'bool' = True, parse_dates=None, infer_datetime_format: 'bool' = False, keep_date_col: 'bool' = False, date_parser=None, dayfirst: 'bool' = False, cache_dates: 'bool' = True, iterator: 'bool' = False, chunksize: 'int | None' = None, compression: 'CompressionOptions' = 'infer', thousands: 'str | None' = None, decimal: 'str' = '.', lineterminator: 'str | None' = None, quotechar: 'str' = '"', quoting: 'int' = 0, doublequote: 'bool' = True, escapechar: 'str | None' = None, comment: 'str | None' = None, encoding: 'str | None' = None, encoding_errors: 'str | None' = 'strict', dialect: 'str | csv.Dialect | None' = None, error_bad_lines: 'bool | None' = None, warn_bad_lines: 'bool | None' = None, on_bad_lines=None, delim_whitespace: 'bool' = False, low_memory=True, memory_map: 'bool' = False, float_precision: "Literal['high', 'legacy'] | None" = None, storage_options: 'StorageOptions' = None) -> 'DataFrame | TextFileReader'
    Read a comma-separated values (csv) file into DataFrame.
    
    Also supports optionally iterating or breaking of the file
    into chunks.
    
    Additional help can be found in the online docs for
    `IO Tools <https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html>`_.
    
    Parameters
    ----------
    filepath_or_buffer : str, path object or file-like object
        Any valid string path is acceptable. The string could be a URL. Valid
        URL schemes include http, ftp, s3, gs, and file. For file URLs, a host is
        expected. A local file could be: file://localhost/path/to/table.csv.
    
        If you want to pass in a path object, pandas accepts any ``os.PathLike``.
    
        By file-like object, we refer to objects with a ``read()`` method, such as
        a file handle (e.g. via builtin ``open`` function) or ``StringIO``.
    sep : str, default ','
        Delimiter to use. If sep is None, the C engine cannot automatically detect
        the separator, but the Python parsing engine can, meaning the latter will
        be used and automatically detect the separator by Python's builtin sniffer
        tool, ``csv.Sniffer``. In addition, separators longer than 1 character and
        different from ``'\s+'`` will be interpreted as regular expressions and
        will also force the use of the Python parsing engine. Note that regex
        delimiters are prone to ignoring quoted data. Regex example: ``'\r\t'``.
    delimiter : str, default ``None``
        Alias for sep.
    header : int, list of int, None, default 'infer'
        Row number(s) to use as the column names, and the start of the
        data.  Default behavior is to infer the column names: if no names
        are passed the behavior is identical to ``header=0`` and column
        names are inferred from the first line of the file, if column
        names are passed explicitly then the behavior is identical to
        ``header=None``. Explicitly pass ``header=0`` to be able to
        replace existing names. The header can be a list of integers that
        specify row locations for a multi-index on the columns
        e.g. [0,1,3]. Intervening rows that are not specified will be
        skipped (e.g. 2 in this example is skipped). Note that this
        parameter ignores commented lines and empty lines if
        ``skip_blank_lines=True``, so ``header=0`` denotes the first line of
        data rather than the first line of the file.
    names : array-like, optional
        List of column names to use. If the file contains a header row,
        then you should explicitly pass ``header=0`` to override the column names.
        Duplicates in this list are not allowed.
    index_col : int, str, sequence of int / str, or False, optional, default ``None``
      Column(s) to use as the row labels of the ``DataFrame``, either given as
      string name or column index. If a sequence of int / str is given, a
      MultiIndex is used.
    
      Note: ``index_col=False`` can be used to force pandas to *not* use the first
      column as the index, e.g. when you have a malformed file with delimiters at
      the end of each line.
    usecols : list-like or callable, optional
        Return a subset of the columns. If list-like, all elements must either
        be positional (i.e. integer indices into the document columns) or strings
        that correspond to column names provided either by the user in `names` or
        inferred from the document header row(s). If ``names`` are given, the document
        header row(s) are not taken into account. For example, a valid list-like
        `usecols` parameter would be ``[0, 1, 2]`` or ``['foo', 'bar', 'baz']``.
        Element order is ignored, so ``usecols=[0, 1]`` is the same as ``[1, 0]``.
        To instantiate a DataFrame from ``data`` with element order preserved use
        ``pd.read_csv(data, usecols=['foo', 'bar'])[['foo', 'bar']]`` for columns
        in ``['foo', 'bar']`` order or
        ``pd.read_csv(data, usecols=['foo', 'bar'])[['bar', 'foo']]``
        for ``['bar', 'foo']`` order.
    
        If callable, the callable function will be evaluated against the column
        names, returning names where the callable function evaluates to True. An
        example of a valid callable argument would be ``lambda x: x.upper() in
        ['AAA', 'BBB', 'DDD']``. Using this parameter results in much faster
        parsing time and lower memory usage.
    squeeze : bool, default False
        If the parsed data only contains one column then return a Series.
    
        .. deprecated:: 1.4.0
            Append ``.squeeze("columns")`` to the call to ``read_csv`` to squeeze
            the data.
    prefix : str, optional
        Prefix to add to column numbers when no header, e.g. 'X' for X0, X1, ...
    
        .. deprecated:: 1.4.0
           Use a list comprehension on the DataFrame's columns after calling ``read_csv``.
    mangle_dupe_cols : bool, default True
        Duplicate columns will be specified as 'X', 'X.1', ...'X.N', rather than
        'X'...'X'. Passing in False will cause data to be overwritten if there
        are duplicate names in the columns.
    
        .. deprecated:: 1.5.0
            Not implemented, and a new argument to specify the pattern for the
            names of duplicated columns will be added instead
    dtype : Type name or dict of column -> type, optional
        Data type for data or columns. E.g. {'a': np.float64, 'b': np.int32,
        'c': 'Int64'}
        Use `str` or `object` together with suitable `na_values` settings
        to preserve and not interpret dtype.
        If converters are specified, they will be applied INSTEAD
        of dtype conversion.
    
        .. versionadded:: 1.5.0
    
            Support for defaultdict was added. Specify a defaultdict as input where
            the default determines the dtype of the columns which are not explicitly
            listed.
    engine : {'c', 'python', 'pyarrow'}, optional
        Parser engine to use. The C and pyarrow engines are faster, while the python engine
        is currently more feature-complete. Multithreading is currently only supported by
        the pyarrow engine.
    
        .. versionadded:: 1.4.0
    
            The "pyarrow" engine was added as an *experimental* engine, and some features
            are unsupported, or may not work correctly, with this engine.
    converters : dict, optional
        Dict of functions for converting values in certain columns. Keys can either
        be integers or column labels.
    true_values : list, optional
        Values to consider as True.
    false_values : list, optional
        Values to consider as False.
    skipinitialspace : bool, default False
        Skip spaces after delimiter.
    skiprows : list-like, int or callable, optional
        Line numbers to skip (0-indexed) or number of lines to skip (int)
        at the start of the file.
    
        If callable, the callable function will be evaluated against the row
        indices, returning True if the row should be skipped and False otherwise.
        An example of a valid callable argument would be ``lambda x: x in [0, 2]``.
    skipfooter : int, default 0
        Number of lines at bottom of file to skip (Unsupported with engine='c').
    nrows : int, optional
        Number of rows of file to read. Useful for reading pieces of large files.
    na_values : scalar, str, list-like, or dict, optional
        Additional strings to recognize as NA/NaN. If dict passed, specific
        per-column NA values.  By default the following values are interpreted as
        NaN: '', '#N/A', '#N/A N/A', '#NA', '-1.#IND', '-1.#QNAN', '-NaN', '-nan',
        '1.#IND', '1.#QNAN', '<NA>', 'N/A', 'NA', 'NULL', 'NaN', 'n/a',
        'nan', 'null'.
    keep_default_na : bool, default True
        Whether or not to include the default NaN values when parsing the data.
        Depending on whether `na_values` is passed in, the behavior is as follows:
    
        * If `keep_default_na` is True, and `na_values` are specified, `na_values`
          is appended to the default NaN values used for parsing.
        * If `keep_default_na` is True, and `na_values` are not specified, only
          the default NaN values are used for parsing.
        * If `keep_default_na` is False, and `na_values` are specified, only
          the NaN values specified `na_values` are used for parsing.
        * If `keep_default_na` is False, and `na_values` are not specified, no
          strings will be parsed as NaN.
    
        Note that if `na_filter` is passed in as False, the `keep_default_na` and
        `na_values` parameters will be ignored.
    na_filter : bool, default True
        Detect missing value markers (empty strings and the value of na_values). In
        data without any NAs, passing na_filter=False can improve the performance
        of reading a large file.
    verbose : bool, default False
        Indicate number of NA values placed in non-numeric columns.
    skip_blank_lines : bool, default True
        If True, skip over blank lines rather than interpreting as NaN values.
    parse_dates : bool or list of int or names or list of lists or dict, default False
        The behavior is as follows:
    
        * boolean. If True -> try parsing the index.
        * list of int or names. e.g. If [1, 2, 3] -> try parsing columns 1, 2, 3
          each as a separate date column.
        * list of lists. e.g.  If [[1, 3]] -> combine columns 1 and 3 and parse as
          a single date column.
        * dict, e.g. {'foo' : [1, 3]} -> parse columns 1, 3 as date and call
          result 'foo'
    
        If a column or index cannot be represented as an array of datetimes,
        say because of an unparsable value or a mixture of timezones, the column
        or index will be returned unaltered as an object data type. For
        non-standard datetime parsing, use ``pd.to_datetime`` after
        ``pd.read_csv``. To parse an index or column with a mixture of timezones,
        specify ``date_parser`` to be a partially-applied
        :func:`pandas.to_datetime` with ``utc=True``. See
        :ref:`io.csv.mixed_timezones` for more.
    
        Note: A fast-path exists for iso8601-formatted dates.
    infer_datetime_format : bool, default False
        If True and `parse_dates` is enabled, pandas will attempt to infer the
        format of the datetime strings in the columns, and if it can be inferred,
        switch to a faster method of parsing them. In some cases this can increase
        the parsing speed by 5-10x.
    keep_date_col : bool, default False
        If True and `parse_dates` specifies combining multiple columns then
        keep the original columns.
    date_parser : function, optional
        Function to use for converting a sequence of string columns to an array of
        datetime instances. The default uses ``dateutil.parser.parser`` to do the
        conversion. Pandas will try to call `date_parser` in three different ways,
        advancing to the next if an exception occurs: 1) Pass one or more arrays
        (as defined by `parse_dates`) as arguments; 2) concatenate (row-wise) the
        string values from the columns defined by `parse_dates` into a single array
        and pass that; and 3) call `date_parser` once for each row using one or
        more strings (corresponding to the columns defined by `parse_dates`) as
        arguments.
    dayfirst : bool, default False
        DD/MM format dates, international and European format.
    cache_dates : bool, default True
        If True, use a cache of unique, converted dates to apply the datetime
        conversion. May produce significant speed-up when parsing duplicate
        date strings, especially ones with timezone offsets.
    
        .. versionadded:: 0.25.0
    iterator : bool, default False
        Return TextFileReader object for iteration or getting chunks with
        ``get_chunk()``.
    
        .. versionchanged:: 1.2
    
           ``TextFileReader`` is a context manager.
    chunksize : int, optional
        Return TextFileReader object for iteration.
        See the `IO Tools docs
        <https://pandas.pydata.org/pandas-docs/stable/io.html#io-chunking>`_
        for more information on ``iterator`` and ``chunksize``.
    
        .. versionchanged:: 1.2
    
           ``TextFileReader`` is a context manager.
    compression : str or dict, default 'infer'
        For on-the-fly decompression of on-disk data. If 'infer' and 'filepath_or_buffer' is
        path-like, then detect compression from the following extensions: '.gz',
        '.bz2', '.zip', '.xz', '.zst', '.tar', '.tar.gz', '.tar.xz' or '.tar.bz2'
        (otherwise no compression).
        If using 'zip' or 'tar', the ZIP file must contain only one data file to be read in.
        Set to ``None`` for no decompression.
        Can also be a dict with key ``'method'`` set
        to one of {``'zip'``, ``'gzip'``, ``'bz2'``, ``'zstd'``, ``'tar'``} and other
        key-value pairs are forwarded to
        ``zipfile.ZipFile``, ``gzip.GzipFile``,
        ``bz2.BZ2File``, ``zstandard.ZstdDecompressor`` or
        ``tarfile.TarFile``, respectively.
        As an example, the following could be passed for Zstandard decompression using a
        custom compression dictionary:
        ``compression={'method': 'zstd', 'dict_data': my_compression_dict}``.
    
            .. versionadded:: 1.5.0
                Added support for `.tar` files.
    
        .. versionchanged:: 1.4.0 Zstandard support.
    
    thousands : str, optional
        Thousands separator.
    decimal : str, default '.'
        Character to recognize as decimal point (e.g. use ',' for European data).
    lineterminator : str (length 1), optional
        Character to break file into lines. Only valid with C parser.
    quotechar : str (length 1), optional
        The character used to denote the start and end of a quoted item. Quoted
        items can include the delimiter and it will be ignored.
    quoting : int or csv.QUOTE_* instance, default 0
        Control field quoting behavior per ``csv.QUOTE_*`` constants. Use one of
        QUOTE_MINIMAL (0), QUOTE_ALL (1), QUOTE_NONNUMERIC (2) or QUOTE_NONE (3).
    doublequote : bool, default ``True``
       When quotechar is specified and quoting is not ``QUOTE_NONE``, indicate
       whether or not to interpret two consecutive quotechar elements INSIDE a
       field as a single ``quotechar`` element.
    escapechar : str (length 1), optional
        One-character string used to escape other characters.
    comment : str, optional
        Indicates remainder of line should not be parsed. If found at the beginning
        of a line, the line will be ignored altogether. This parameter must be a
        single character. Like empty lines (as long as ``skip_blank_lines=True``),
        fully commented lines are ignored by the parameter `header` but not by
        `skiprows`. For example, if ``comment='#'``, parsing
        ``#empty\na,b,c\n1,2,3`` with ``header=0`` will result in 'a,b,c' being
        treated as the header.
    encoding : str, optional
        Encoding to use for UTF when reading/writing (ex. 'utf-8'). `List of Python
        standard encodings
        <https://docs.python.org/3/library/codecs.html#standard-encodings>`_ .
    
        .. versionchanged:: 1.2
    
           When ``encoding`` is ``None``, ``errors="replace"`` is passed to
           ``open()``. Otherwise, ``errors="strict"`` is passed to ``open()``.
           This behavior was previously only the case for ``engine="python"``.
    
        .. versionchanged:: 1.3.0
    
           ``encoding_errors`` is a new argument. ``encoding`` has no longer an
           influence on how encoding errors are handled.
    
    encoding_errors : str, optional, default "strict"
        How encoding errors are treated. `List of possible values
        <https://docs.python.org/3/library/codecs.html#error-handlers>`_ .
    
        .. versionadded:: 1.3.0
    
    dialect : str or csv.Dialect, optional
        If provided, this parameter will override values (default or not) for the
        following parameters: `delimiter`, `doublequote`, `escapechar`,
        `skipinitialspace`, `quotechar`, and `quoting`. If it is necessary to
        override values, a ParserWarning will be issued. See csv.Dialect
        documentation for more details.
    error_bad_lines : bool, optional, default ``None``
        Lines with too many fields (e.g. a csv line with too many commas) will by
        default cause an exception to be raised, and no DataFrame will be returned.
        If False, then these "bad lines" will be dropped from the DataFrame that is
        returned.
    
        .. deprecated:: 1.3.0
           The ``on_bad_lines`` parameter should be used instead to specify behavior upon
           encountering a bad line instead.
    warn_bad_lines : bool, optional, default ``None``
        If error_bad_lines is False, and warn_bad_lines is True, a warning for each
        "bad line" will be output.
    
        .. deprecated:: 1.3.0
           The ``on_bad_lines`` parameter should be used instead to specify behavior upon
           encountering a bad line instead.
    on_bad_lines : {'error', 'warn', 'skip'} or callable, default 'error'
        Specifies what to do upon encountering a bad line (a line with too many fields).
        Allowed values are :
    
            - 'error', raise an Exception when a bad line is encountered.
            - 'warn', raise a warning when a bad line is encountered and skip that line.
            - 'skip', skip bad lines without raising or warning when they are encountered.
    
        .. versionadded:: 1.3.0
    
        .. versionadded:: 1.4.0
    
            - callable, function with signature
              ``(bad_line: list[str]) -> list[str] | None`` that will process a single
              bad line. ``bad_line`` is a list of strings split by the ``sep``.
              If the function returns ``None``, the bad line will be ignored.
              If the function returns a new list of strings with more elements than
              expected, a ``ParserWarning`` will be emitted while dropping extra elements.
              Only supported when ``engine="python"``
    
    delim_whitespace : bool, default False
        Specifies whether or not whitespace (e.g. ``' '`` or ``'    '``) will be
        used as the sep. Equivalent to setting ``sep='\s+'``. If this option
        is set to True, nothing should be passed in for the ``delimiter``
        parameter.
    low_memory : bool, default True
        Internally process the file in chunks, resulting in lower memory use
        while parsing, but possibly mixed type inference.  To ensure no mixed
        types either set False, or specify the type with the `dtype` parameter.
        Note that the entire file is read into a single DataFrame regardless,
        use the `chunksize` or `iterator` parameter to return the data in chunks.
        (Only valid with C parser).
    memory_map : bool, default False
        If a filepath is provided for `filepath_or_buffer`, map the file object
        directly onto memory and access the data directly from there. Using this
        option can improve performance because there is no longer any I/O overhead.
    float_precision : str, optional
        Specifies which converter the C engine should use for floating-point
        values. The options are ``None`` or 'high' for the ordinary converter,
        'legacy' for the original lower precision pandas converter, and
        'round_trip' for the round-trip converter.
    
        .. versionchanged:: 1.2
    
    storage_options : dict, optional
        Extra options that make sense for a particular storage connection, e.g.
        host, port, username, password, etc. For HTTP(S) URLs the key-value pairs
        are forwarded to ``urllib.request.Request`` as header options. For other
        URLs (e.g. starting with "s3://", and "gcs://") the key-value pairs are
        forwarded to ``fsspec.open``. Please see ``fsspec`` and ``urllib`` for more
        details, and for more examples on storage options refer `here
        <https://pandas.pydata.org/docs/user_guide/io.html?
        highlight=storage_options#reading-writing-remote-files>`_.
    
        .. versionadded:: 1.2
    
    Returns
    -------
    DataFrame or TextParser
        A comma-separated values (csv) file is returned as two-dimensional
        data structure with labeled axes.
    
    See Also
    --------
    DataFrame.to_csv : Write DataFrame to a comma-separated values (csv) file.
    read_csv : Read a comma-separated values (csv) file into DataFrame.
    read_fwf : Read a table of fixed-width formatted lines into DataFrame.
    
    Examples
    --------
    >>> pd.read_csv('data.csv')  # doctest: +SKIP
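
Two arguments from that long signature worth highlighting for large files are nrows and chunksize. A minimal sketch (the chunk size of 100 is arbitrary; the with-statement form assumes pandas >= 1.2, where the TextFileReader is a context manager):

# peek at the first five rows without loading the whole file
preview = pd.read_csv("WorldCupMatches.csv", nrows=5)

# stream the file in chunks of 100 rows and count them as we go
total_rows = 0
with pd.read_csv("WorldCupMatches.csv", chunksize=100) as reader:
    for chunk in reader:
        total_rows += len(chunk)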

Data out with pandas

Every reader method has a matching writer for converting pandas DataFrames and saving them locally.

For example:

In [31]:
# export as stata file
d.to_stata("worldcupmatches.dta",  version=118)
/var/folders/jy/10_nyhkn3nv_rrbnd8f_fr940000gp/T/ipykernel_50502/3669814853.py:2: InvalidColumnName: 
Not all pandas column names were valid Stata variable names.
The following replacements have been made:

    Home Team Name   ->   Home_Team_Name
    Home Team Goals   ->   Home_Team_Goals
    Away Team Goals   ->   Away_Team_Goals
    Away Team Name   ->   Away_Team_Name
    Win conditions   ->   Win_conditions
    Half-time Home Goals   ->   Half_time_Home_Goals
    Half-time Away Goals   ->   Half_time_Away_Goals
    Assistant 1   ->   Assistant_1
    Assistant 2   ->   Assistant_2
    Home Team Initials   ->   Home_Team_Initials
    Away Team Initials   ->   Away_Team_Initials

If this is not what you expect, please make sure you have Stata-compliant
column names in your DataFrame (strings only, max 32 characters, only
alphanumerics and underscores, no Stata reserved words)

  d.to_stata("worldcupmatches.dta",  version=118)
In [32]:
# load back again
d_stata = pd.read_stata("worldcupmatches.dta")
In [33]:
# see the data
d_stata.head()
Out[33]:
index Year Datetime Stage Stadium City Home_Team_Name Home_Team_Goals Away_Team_Goals Away_Team_Name ... Attendance Half_time_Home_Goals Half_time_Away_Goals Referee Assistant_1 Assistant_2 RoundID MatchID Home_Team_Initials Away_Team_Initials
0 0 1930 13 Jul 1930 - 15:00 Group 1 Pocitos Montevideo France 4 1 Mexico ... 4444.0 3 0 LOMBARDI Domingo (URU) CRISTOPHE Henry (BEL) REGO Gilberto (BRA) 201 1096 FRA MEX
1 1 1930 13 Jul 1930 - 15:00 Group 4 Parque Central Montevideo USA 3 0 Belgium ... 18346.0 2 0 MACIAS Jose (ARG) MATEUCCI Francisco (URU) WARNKEN Alberto (CHI) 201 1090 USA BEL
2 2 1930 14 Jul 1930 - 12:45 Group 2 Parque Central Montevideo Yugoslavia 2 1 Brazil ... 24059.0 2 0 TEJADA Anibal (URU) VALLARINO Ricardo (URU) BALWAY Thomas (FRA) 201 1093 YUG BRA
3 3 1930 14 Jul 1930 - 14:50 Group 3 Pocitos Montevideo Romania 3 1 Peru ... 2549.0 1 0 WARNKEN Alberto (CHI) LANGENUS Jean (BEL) MATEUCCI Francisco (URU) 201 1098 ROU PER
4 4 1930 15 Jul 1930 - 16:00 Group 1 Parque Central Montevideo Argentina 1 0 France ... 23409.0 0 0 REGO Gilberto (BRA) SAUCEDO Ulises (BOL) RADULESCU Constantin (ROU) 201 1085 ARG FRA

5 rows × 21 columns

In [34]:
# to csv
d_stata.to_csv("worldcupmatches_.csv")
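
One detail to notice above: because the writers store the DataFrame index by default, the round trip through Stata added an extra index column to d_stata. If you do not want that, drop the index when writing. A minimal sketch (the file names are just examples):

# write files without the synthetic row index
d.to_csv("worldcupmatches_noindex.csv", index=False)
d.to_stata("worldcupmatches_noindex.dta", version=118, write_index=False)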

Practice

Explore the arguments of the pd.read_csv() method. Open WorldCupMatches.csv with the following options:

  • using comma as separator,
  • indexing by year,
  • selecting only a smaller set of columns
  • open only 10 rows after skipping the first 50
  • parsing all dates as datetimes
In [14]:
help(pd.read_csv)

In [15]:
# my answer
pd.read_csv("WorldCupMatches.csv",
            sep=",",                               # separator used in the file
            index_col="Year",                      # set a column as the row index
            usecols=["Year", "Stage", "Stadium"],  # keep only this subset of columns
            nrows=10,                              # read only 10 rows of data
            na_values="nan",                       # also treat the string "nan" as missing
            skiprows=np.arange(1, 51),             # skip the first 50 data rows (row 0 is the header)
            parse_dates=True,                      # try to parse the index (Year) as datetimes
            low_memory=True)                       # process the file in chunks for lower memory use (useful on large data)

JSON Data

JSON (short for JavaScript Object Notation) has become one of the most widely used data formats in data science. The main reason is that JSON is the primary way data gets transferred via HTTP requests between web browsers and other applications, so we will see a lot of JSON data when querying APIs.

Let's see an example of:

  • Saving a DataFrame as JSON
  • Loading a JSON file back into your Python environment
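
As a compact preview of both steps (the worldcups.json file name is just an example), the round trip looks like this; the cells below walk through each piece:

# save a DataFrame as row-wise JSON records, then load it back
d_wc = pd.read_csv("WorldCups.csv")
d_wc.to_json("worldcups.json", orient="records")
d_wc_back = pd.read_json("worldcups.json", orient="records")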
In [9]:
# load the csv
# notice this is a different dataset
d_wc = pd.read_csv("WorldCups.csv") 
d_wc
Out[9]:
Year Country Winner Runners-Up Third Fourth GoalsScored QualifiedTeams MatchesPlayed Attendance
0 1930 Uruguay Uruguay Argentina USA Yugoslavia 70 13 18 590.549
1 1934 Italy Italy Czechoslovakia Germany Austria 70 16 17 363.000
2 1938 France Italy Hungary Brazil Sweden 84 15 18 375.700
3 1950 Brazil Uruguay Brazil Sweden Spain 88 13 22 1.045.246
4 1954 Switzerland Germany FR Hungary Austria Uruguay 140 16 26 768.607
5 1958 Sweden Brazil Sweden France Germany FR 126 16 35 819.810
6 1962 Chile Brazil Czechoslovakia Chile Yugoslavia 89 16 32 893.172
7 1966 England England Germany FR Portugal Soviet Union 89 16 32 1.563.135
8 1970 Mexico Brazil Italy Germany FR Uruguay 95 16 32 1.603.975
9 1974 Germany Germany FR Netherlands Poland Brazil 97 16 38 1.865.753
10 1978 Argentina Argentina Netherlands Brazil Italy 102 16 38 1.545.791
11 1982 Spain Italy Germany FR Poland France 146 24 52 2.109.723
12 1986 Mexico Argentina Germany FR France Belgium 132 24 52 2.394.031
13 1990 Italy Germany FR Argentina Italy England 115 24 52 2.516.215
14 1994 USA Brazil Italy Sweden Bulgaria 141 24 52 3.587.538
15 1998 France France Brazil Croatia Netherlands 171 32 64 2.785.100
16 2002 Korea/Japan Brazil Germany Turkey Korea Republic 161 32 64 2.705.197
17 2006 Germany Italy France Germany Portugal 147 32 64 3.359.439
18 2010 South Africa Spain Netherlands Germany Uruguay 145 32 64 3.178.856
19 2014 Brazil Germany Argentina Netherlands Brazil 171 32 64 3.386.810
In [10]:
# let's first see what a json looks like. It is a dictionary!
d_wc.to_json()
Out[10]:
'{"Year":{"0":1930,"1":1934,"2":1938,"3":1950,"4":1954,"5":1958,"6":1962,"7":1966,"8":1970,"9":1974,"10":1978,"11":1982,"12":1986,"13":1990,"14":1994,"15":1998,"16":2002,"17":2006,"18":2010,"19":2014},"Country":{"0":"Uruguay","1":"Italy","2":"France","3":"Brazil","4":"Switzerland","5":"Sweden","6":"Chile","7":"England","8":"Mexico","9":"Germany","10":"Argentina","11":"Spain","12":"Mexico","13":"Italy","14":"USA","15":"France","16":"Korea\\/Japan","17":"Germany","18":"South Africa","19":"Brazil"},"Winner":{"0":"Uruguay","1":"Italy","2":"Italy","3":"Uruguay","4":"Germany FR","5":"Brazil","6":"Brazil","7":"England","8":"Brazil","9":"Germany FR","10":"Argentina","11":"Italy","12":"Argentina","13":"Germany FR","14":"Brazil","15":"France","16":"Brazil","17":"Italy","18":"Spain","19":"Germany"},"Runners-Up":{"0":"Argentina","1":"Czechoslovakia","2":"Hungary","3":"Brazil","4":"Hungary","5":"Sweden","6":"Czechoslovakia","7":"Germany FR","8":"Italy","9":"Netherlands","10":"Netherlands","11":"Germany FR","12":"Germany FR","13":"Argentina","14":"Italy","15":"Brazil","16":"Germany","17":"France","18":"Netherlands","19":"Argentina"},"Third":{"0":"USA","1":"Germany","2":"Brazil","3":"Sweden","4":"Austria","5":"France","6":"Chile","7":"Portugal","8":"Germany FR","9":"Poland","10":"Brazil","11":"Poland","12":"France","13":"Italy","14":"Sweden","15":"Croatia","16":"Turkey","17":"Germany","18":"Germany","19":"Netherlands"},"Fourth":{"0":"Yugoslavia","1":"Austria","2":"Sweden","3":"Spain","4":"Uruguay","5":"Germany FR","6":"Yugoslavia","7":"Soviet Union","8":"Uruguay","9":"Brazil","10":"Italy","11":"France","12":"Belgium","13":"England","14":"Bulgaria","15":"Netherlands","16":"Korea Republic","17":"Portugal","18":"Uruguay","19":"Brazil"},"GoalsScored":{"0":70,"1":70,"2":84,"3":88,"4":140,"5":126,"6":89,"7":89,"8":95,"9":97,"10":102,"11":146,"12":132,"13":115,"14":141,"15":171,"16":161,"17":147,"18":145,"19":171},"QualifiedTeams":{"0":13,"1":16,"2":15,"3":13,"4":16,"5":16,"6":16,"7":16,"8":16,"9":16,"10":16,"11":24,"12":24,"13":24,"14":24,"15":32,"16":32,"17":32,"18":32,"19":32},"MatchesPlayed":{"0":18,"1":17,"2":18,"3":22,"4":26,"5":35,"6":32,"7":32,"8":32,"9":38,"10":38,"11":52,"12":52,"13":52,"14":52,"15":64,"16":64,"17":64,"18":64,"19":64},"Attendance":{"0":"590.549","1":"363.000","2":"375.700","3":"1.045.246","4":"768.607","5":"819.810","6":"893.172","7":"1.563.135","8":"1.603.975","9":"1.865.753","10":"1.545.791","11":"2.109.723","12":"2.394.031","13":"2.516.215","14":"3.587.538","15":"2.785.100","16":"2.705.197","17":"3.359.439","18":"3.178.856","19":"3.386.810"}}'
In [11]:
# you can also serialize the JSON by record (row-wise, one object per row, as we learned with nested containers)
d_wc.to_json(orient="records")
Out[11]:
'[{"Year":1930,"Country":"Uruguay","Winner":"Uruguay","Runners-Up":"Argentina","Third":"USA","Fourth":"Yugoslavia","GoalsScored":70,"QualifiedTeams":13,"MatchesPlayed":18,"Attendance":"590.549"},{"Year":1934,"Country":"Italy","Winner":"Italy","Runners-Up":"Czechoslovakia","Third":"Germany","Fourth":"Austria","GoalsScored":70,"QualifiedTeams":16,"MatchesPlayed":17,"Attendance":"363.000"},{"Year":1938,"Country":"France","Winner":"Italy","Runners-Up":"Hungary","Third":"Brazil","Fourth":"Sweden","GoalsScored":84,"QualifiedTeams":15,"MatchesPlayed":18,"Attendance":"375.700"},{"Year":1950,"Country":"Brazil","Winner":"Uruguay","Runners-Up":"Brazil","Third":"Sweden","Fourth":"Spain","GoalsScored":88,"QualifiedTeams":13,"MatchesPlayed":22,"Attendance":"1.045.246"},{"Year":1954,"Country":"Switzerland","Winner":"Germany FR","Runners-Up":"Hungary","Third":"Austria","Fourth":"Uruguay","GoalsScored":140,"QualifiedTeams":16,"MatchesPlayed":26,"Attendance":"768.607"},{"Year":1958,"Country":"Sweden","Winner":"Brazil","Runners-Up":"Sweden","Third":"France","Fourth":"Germany FR","GoalsScored":126,"QualifiedTeams":16,"MatchesPlayed":35,"Attendance":"819.810"},{"Year":1962,"Country":"Chile","Winner":"Brazil","Runners-Up":"Czechoslovakia","Third":"Chile","Fourth":"Yugoslavia","GoalsScored":89,"QualifiedTeams":16,"MatchesPlayed":32,"Attendance":"893.172"},{"Year":1966,"Country":"England","Winner":"England","Runners-Up":"Germany FR","Third":"Portugal","Fourth":"Soviet Union","GoalsScored":89,"QualifiedTeams":16,"MatchesPlayed":32,"Attendance":"1.563.135"},{"Year":1970,"Country":"Mexico","Winner":"Brazil","Runners-Up":"Italy","Third":"Germany FR","Fourth":"Uruguay","GoalsScored":95,"QualifiedTeams":16,"MatchesPlayed":32,"Attendance":"1.603.975"},{"Year":1974,"Country":"Germany","Winner":"Germany FR","Runners-Up":"Netherlands","Third":"Poland","Fourth":"Brazil","GoalsScored":97,"QualifiedTeams":16,"MatchesPlayed":38,"Attendance":"1.865.753"},{"Year":1978,"Country":"Argentina","Winner":"Argentina","Runners-Up":"Netherlands","Third":"Brazil","Fourth":"Italy","GoalsScored":102,"QualifiedTeams":16,"MatchesPlayed":38,"Attendance":"1.545.791"},{"Year":1982,"Country":"Spain","Winner":"Italy","Runners-Up":"Germany FR","Third":"Poland","Fourth":"France","GoalsScored":146,"QualifiedTeams":24,"MatchesPlayed":52,"Attendance":"2.109.723"},{"Year":1986,"Country":"Mexico","Winner":"Argentina","Runners-Up":"Germany FR","Third":"France","Fourth":"Belgium","GoalsScored":132,"QualifiedTeams":24,"MatchesPlayed":52,"Attendance":"2.394.031"},{"Year":1990,"Country":"Italy","Winner":"Germany FR","Runners-Up":"Argentina","Third":"Italy","Fourth":"England","GoalsScored":115,"QualifiedTeams":24,"MatchesPlayed":52,"Attendance":"2.516.215"},{"Year":1994,"Country":"USA","Winner":"Brazil","Runners-Up":"Italy","Third":"Sweden","Fourth":"Bulgaria","GoalsScored":141,"QualifiedTeams":24,"MatchesPlayed":52,"Attendance":"3.587.538"},{"Year":1998,"Country":"France","Winner":"France","Runners-Up":"Brazil","Third":"Croatia","Fourth":"Netherlands","GoalsScored":171,"QualifiedTeams":32,"MatchesPlayed":64,"Attendance":"2.785.100"},{"Year":2002,"Country":"Korea\\/Japan","Winner":"Brazil","Runners-Up":"Germany","Third":"Turkey","Fourth":"Korea Republic","GoalsScored":161,"QualifiedTeams":32,"MatchesPlayed":64,"Attendance":"2.705.197"},{"Year":2006,"Country":"Germany","Winner":"Italy","Runners-Up":"France","Third":"Germany","Fourth":"Portugal","GoalsScored":147,"QualifiedTeams":32,"MatchesPlayed":64,"Attendance":"3.359.439"},{"Year":2010,"Country":"South 
Africa","Winner":"Spain","Runners-Up":"Netherlands","Third":"Germany","Fourth":"Uruguay","GoalsScored":145,"QualifiedTeams":32,"MatchesPlayed":64,"Attendance":"3.178.856"},{"Year":2014,"Country":"Brazil","Winner":"Germany","Runners-Up":"Argentina","Third":"Netherlands","Fourth":"Brazil","GoalsScored":171,"QualifiedTeams":32,"MatchesPlayed":64,"Attendance":"3.386.810"}]'
In [27]:
# compare with to_dict(orient='records'), which returns a list of dictionaries, one per row
d_wc.to_dict(orient='records')
Out[27]:
[{'Year': 1930,
  'Country': 'Uruguay',
  'Winner': 'Uruguay',
  'Runners-Up': 'Argentina',
  'Third': 'USA',
  'Fourth': 'Yugoslavia',
  'GoalsScored': 70,
  'QualifiedTeams': 13,
  'MatchesPlayed': 18,
  'Attendance': '590.549'},
 {'Year': 1934,
  'Country': 'Italy',
  'Winner': 'Italy',
  'Runners-Up': 'Czechoslovakia',
  'Third': 'Germany',
  'Fourth': 'Austria',
  'GoalsScored': 70,
  'QualifiedTeams': 16,
  'MatchesPlayed': 17,
  'Attendance': '363.000'},
 {'Year': 1938,
  'Country': 'France',
  'Winner': 'Italy',
  'Runners-Up': 'Hungary',
  'Third': 'Brazil',
  'Fourth': 'Sweden',
  'GoalsScored': 84,
  'QualifiedTeams': 15,
  'MatchesPlayed': 18,
  'Attendance': '375.700'},
 {'Year': 1950,
  'Country': 'Brazil',
  'Winner': 'Uruguay',
  'Runners-Up': 'Brazil',
  'Third': 'Sweden',
  'Fourth': 'Spain',
  'GoalsScored': 88,
  'QualifiedTeams': 13,
  'MatchesPlayed': 22,
  'Attendance': '1.045.246'},
 {'Year': 1954,
  'Country': 'Switzerland',
  'Winner': 'Germany FR',
  'Runners-Up': 'Hungary',
  'Third': 'Austria',
  'Fourth': 'Uruguay',
  'GoalsScored': 140,
  'QualifiedTeams': 16,
  'MatchesPlayed': 26,
  'Attendance': '768.607'},
 {'Year': 1958,
  'Country': 'Sweden',
  'Winner': 'Brazil',
  'Runners-Up': 'Sweden',
  'Third': 'France',
  'Fourth': 'Germany FR',
  'GoalsScored': 126,
  'QualifiedTeams': 16,
  'MatchesPlayed': 35,
  'Attendance': '819.810'},
 {'Year': 1962,
  'Country': 'Chile',
  'Winner': 'Brazil',
  'Runners-Up': 'Czechoslovakia',
  'Third': 'Chile',
  'Fourth': 'Yugoslavia',
  'GoalsScored': 89,
  'QualifiedTeams': 16,
  'MatchesPlayed': 32,
  'Attendance': '893.172'},
 {'Year': 1966,
  'Country': 'England',
  'Winner': 'England',
  'Runners-Up': 'Germany FR',
  'Third': 'Portugal',
  'Fourth': 'Soviet Union',
  'GoalsScored': 89,
  'QualifiedTeams': 16,
  'MatchesPlayed': 32,
  'Attendance': '1.563.135'},
 {'Year': 1970,
  'Country': 'Mexico',
  'Winner': 'Brazil',
  'Runners-Up': 'Italy',
  'Third': 'Germany FR',
  'Fourth': 'Uruguay',
  'GoalsScored': 95,
  'QualifiedTeams': 16,
  'MatchesPlayed': 32,
  'Attendance': '1.603.975'},
 {'Year': 1974,
  'Country': 'Germany',
  'Winner': 'Germany FR',
  'Runners-Up': 'Netherlands',
  'Third': 'Poland',
  'Fourth': 'Brazil',
  'GoalsScored': 97,
  'QualifiedTeams': 16,
  'MatchesPlayed': 38,
  'Attendance': '1.865.753'},
 {'Year': 1978,
  'Country': 'Argentina',
  'Winner': 'Argentina',
  'Runners-Up': 'Netherlands',
  'Third': 'Brazil',
  'Fourth': 'Italy',
  'GoalsScored': 102,
  'QualifiedTeams': 16,
  'MatchesPlayed': 38,
  'Attendance': '1.545.791'},
 {'Year': 1982,
  'Country': 'Spain',
  'Winner': 'Italy',
  'Runners-Up': 'Germany FR',
  'Third': 'Poland',
  'Fourth': 'France',
  'GoalsScored': 146,
  'QualifiedTeams': 24,
  'MatchesPlayed': 52,
  'Attendance': '2.109.723'},
 {'Year': 1986,
  'Country': 'Mexico',
  'Winner': 'Argentina',
  'Runners-Up': 'Germany FR',
  'Third': 'France',
  'Fourth': 'Belgium',
  'GoalsScored': 132,
  'QualifiedTeams': 24,
  'MatchesPlayed': 52,
  'Attendance': '2.394.031'},
 {'Year': 1990,
  'Country': 'Italy',
  'Winner': 'Germany FR',
  'Runners-Up': 'Argentina',
  'Third': 'Italy',
  'Fourth': 'England',
  'GoalsScored': 115,
  'QualifiedTeams': 24,
  'MatchesPlayed': 52,
  'Attendance': '2.516.215'},
 {'Year': 1994,
  'Country': 'USA',
  'Winner': 'Brazil',
  'Runners-Up': 'Italy',
  'Third': 'Sweden',
  'Fourth': 'Bulgaria',
  'GoalsScored': 141,
  'QualifiedTeams': 24,
  'MatchesPlayed': 52,
  'Attendance': '3.587.538'},
 {'Year': 1998,
  'Country': 'France',
  'Winner': 'France',
  'Runners-Up': 'Brazil',
  'Third': 'Croatia',
  'Fourth': 'Netherlands',
  'GoalsScored': 171,
  'QualifiedTeams': 32,
  'MatchesPlayed': 64,
  'Attendance': '2.785.100'},
 {'Year': 2002,
  'Country': 'Korea/Japan',
  'Winner': 'Brazil',
  'Runners-Up': 'Germany',
  'Third': 'Turkey',
  'Fourth': 'Korea Republic',
  'GoalsScored': 161,
  'QualifiedTeams': 32,
  'MatchesPlayed': 64,
  'Attendance': '2.705.197'},
 {'Year': 2006,
  'Country': 'Germany',
  'Winner': 'Italy',
  'Runners-Up': 'France',
  'Third': 'Germany',
  'Fourth': 'Portugal',
  'GoalsScored': 147,
  'QualifiedTeams': 32,
  'MatchesPlayed': 64,
  'Attendance': '3.359.439'},
 {'Year': 2010,
  'Country': 'South Africa',
  'Winner': 'Spain',
  'Runners-Up': 'Netherlands',
  'Third': 'Germany',
  'Fourth': 'Uruguay',
  'GoalsScored': 145,
  'QualifiedTeams': 32,
  'MatchesPlayed': 64,
  'Attendance': '3.178.856'},
 {'Year': 2014,
  'Country': 'Brazil',
  'Winner': 'Germany',
  'Runners-Up': 'Argentina',
  'Third': 'Netherlands',
  'Fourth': 'Brazil',
  'GoalsScored': 171,
  'QualifiedTeams': 32,
  'MatchesPlayed': 64,
  'Attendance': '3.386.810'}]
In [12]:
# save and look in the file
d_wc.to_json("worldcup.json", orient="records")
In [13]:
# load
d = pd.read_json("worldcup.json")

# see 
d.head()
Out[13]:
Year Country Winner Runners-Up Third Fourth GoalsScored QualifiedTeams MatchesPlayed Attendance
0 1930 Uruguay Uruguay Argentina USA Yugoslavia 70 13 18 590.549
1 1934 Italy Italy Czechoslovakia Germany Austria 70 16 17 363.000
2 1938 France Italy Hungary Brazil Sweden 84 15 18 375.700
3 1950 Brazil Uruguay Brazil Sweden Spain 88 13 22 1.045.246
4 1954 Switzerland Germany FR Hungary Austria Uruguay 140 16 26 768.607
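When a JSON file stores one record per line (newline-delimited JSON, common for large files and APIs), both to_json and read_json accept lines=True together with orient="records". A minimal sketch, using a hypothetical worldcup.jsonl file name:

In [ ]:
# write one JSON object per line
d_wc.to_json("worldcup.jsonl", orient="records", lines=True)

# read it back, telling pandas that each line is a separate record
d_jsonl = pd.read_json("worldcup.jsonl", orient="records", lines=True)
d_jsonl.head()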

Data Type Conversion

Pandas also provides methods to convert your data frame into native Python data structures. These can be useful for accessing your dataframe in a different format, for example, as a dictionary or a list.

In [14]:
# to a dictionary (column-oriented by default: {column: {index: value}})
d_wc.to_dict()
Out[14]:
{'Year': {0: 1930,
  1: 1934,
  2: 1938,
  3: 1950,
  4: 1954,
  5: 1958,
  6: 1962,
  7: 1966,
  8: 1970,
  9: 1974,
  10: 1978,
  11: 1982,
  12: 1986,
  13: 1990,
  14: 1994,
  15: 1998,
  16: 2002,
  17: 2006,
  18: 2010,
  19: 2014},
 'Country': {0: 'Uruguay',
  1: 'Italy',
  2: 'France',
  3: 'Brazil',
  4: 'Switzerland',
  5: 'Sweden',
  6: 'Chile',
  7: 'England',
  8: 'Mexico',
  9: 'Germany',
  10: 'Argentina',
  11: 'Spain',
  12: 'Mexico',
  13: 'Italy',
  14: 'USA',
  15: 'France',
  16: 'Korea/Japan',
  17: 'Germany',
  18: 'South Africa',
  19: 'Brazil'},
 'Winner': {0: 'Uruguay',
  1: 'Italy',
  2: 'Italy',
  3: 'Uruguay',
  4: 'Germany FR',
  5: 'Brazil',
  6: 'Brazil',
  7: 'England',
  8: 'Brazil',
  9: 'Germany FR',
  10: 'Argentina',
  11: 'Italy',
  12: 'Argentina',
  13: 'Germany FR',
  14: 'Brazil',
  15: 'France',
  16: 'Brazil',
  17: 'Italy',
  18: 'Spain',
  19: 'Germany'},
 'Runners-Up': {0: 'Argentina',
  1: 'Czechoslovakia',
  2: 'Hungary',
  3: 'Brazil',
  4: 'Hungary',
  5: 'Sweden',
  6: 'Czechoslovakia',
  7: 'Germany FR',
  8: 'Italy',
  9: 'Netherlands',
  10: 'Netherlands',
  11: 'Germany FR',
  12: 'Germany FR',
  13: 'Argentina',
  14: 'Italy',
  15: 'Brazil',
  16: 'Germany',
  17: 'France',
  18: 'Netherlands',
  19: 'Argentina'},
 'Third': {0: 'USA',
  1: 'Germany',
  2: 'Brazil',
  3: 'Sweden',
  4: 'Austria',
  5: 'France',
  6: 'Chile',
  7: 'Portugal',
  8: 'Germany FR',
  9: 'Poland',
  10: 'Brazil',
  11: 'Poland',
  12: 'France',
  13: 'Italy',
  14: 'Sweden',
  15: 'Croatia',
  16: 'Turkey',
  17: 'Germany',
  18: 'Germany',
  19: 'Netherlands'},
 'Fourth': {0: 'Yugoslavia',
  1: 'Austria',
  2: 'Sweden',
  3: 'Spain',
  4: 'Uruguay',
  5: 'Germany FR',
  6: 'Yugoslavia',
  7: 'Soviet Union',
  8: 'Uruguay',
  9: 'Brazil',
  10: 'Italy',
  11: 'France',
  12: 'Belgium',
  13: 'England',
  14: 'Bulgaria',
  15: 'Netherlands',
  16: 'Korea Republic',
  17: 'Portugal',
  18: 'Uruguay',
  19: 'Brazil'},
 'GoalsScored': {0: 70,
  1: 70,
  2: 84,
  3: 88,
  4: 140,
  5: 126,
  6: 89,
  7: 89,
  8: 95,
  9: 97,
  10: 102,
  11: 146,
  12: 132,
  13: 115,
  14: 141,
  15: 171,
  16: 161,
  17: 147,
  18: 145,
  19: 171},
 'QualifiedTeams': {0: 13,
  1: 16,
  2: 15,
  3: 13,
  4: 16,
  5: 16,
  6: 16,
  7: 16,
  8: 16,
  9: 16,
  10: 16,
  11: 24,
  12: 24,
  13: 24,
  14: 24,
  15: 32,
  16: 32,
  17: 32,
  18: 32,
  19: 32},
 'MatchesPlayed': {0: 18,
  1: 17,
  2: 18,
  3: 22,
  4: 26,
  5: 35,
  6: 32,
  7: 32,
  8: 32,
  9: 38,
  10: 38,
  11: 52,
  12: 52,
  13: 52,
  14: 52,
  15: 64,
  16: 64,
  17: 64,
  18: 64,
  19: 64},
 'Attendance': {0: '590.549',
  1: '363.000',
  2: '375.700',
  3: '1.045.246',
  4: '768.607',
  5: '819.810',
  6: '893.172',
  7: '1.563.135',
  8: '1.603.975',
  9: '1.865.753',
  10: '1.545.791',
  11: '2.109.723',
  12: '2.394.031',
  13: '2.516.215',
  14: '3.587.538',
  15: '2.785.100',
  16: '2.705.197',
  17: '3.359.439',
  18: '3.178.856',
  19: '3.386.810'}}
In [15]:
# the first row as a numpy array (via the .values attribute)
d_wc.values[0]
Out[15]:
array([1930, 'Uruguay', 'Uruguay', 'Argentina', 'USA', 'Yugoslavia', 70,
       13, 18, '590.549'], dtype=object)
In [16]:
# the first row as a plain Python list (tolist() is a numpy array method)
d_wc.values[0].tolist()
Out[16]:
[1930,
 'Uruguay',
 'Uruguay',
 'Argentina',
 'USA',
 'Yugoslavia',
 70,
 13,
 18,
 '590.549']
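The cells above convert a single row. To convert the whole DataFrame, a minimal sketch (to_numpy() is the recommended modern spelling of the .values attribute; variable names are just illustrative):

In [ ]:
# the entire DataFrame as a 2-d numpy array
arr = d_wc.to_numpy()

# the entire DataFrame as a nested list: one inner list per row
nested = d_wc.values.tolist()
nested[0]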

Previewing and Describing your data

You just loaded your first dataset with pandas. Let's look at some useful tools to preview your data.

pandas.DataFrame.head() : print the first n rows (5 by default)

In [17]:
d_wc.head()
Out[17]:
Year Country Winner Runners-Up Third Fourth GoalsScored QualifiedTeams MatchesPlayed Attendance
0 1930 Uruguay Uruguay Argentina USA Yugoslavia 70 13 18 590.549
1 1934 Italy Italy Czechoslovakia Germany Austria 70 16 17 363.000
2 1938 France Italy Hungary Brazil Sweden 84 15 18 375.700
3 1950 Brazil Uruguay Brazil Sweden Spain 88 13 22 1.045.246
4 1954 Switzerland Germany FR Hungary Austria Uruguay 140 16 26 768.607

pandas.DataFrame.tail() : print the last n rows

In [18]:
d_wc.tail(10)
Out[18]:
Year Country Winner Runners-Up Third Fourth GoalsScored QualifiedTeams MatchesPlayed Attendance
10 1978 Argentina Argentina Netherlands Brazil Italy 102 16 38 1.545.791
11 1982 Spain Italy Germany FR Poland France 146 24 52 2.109.723
12 1986 Mexico Argentina Germany FR France Belgium 132 24 52 2.394.031
13 1990 Italy Germany FR Argentina Italy England 115 24 52 2.516.215
14 1994 USA Brazil Italy Sweden Bulgaria 141 24 52 3.587.538
15 1998 France France Brazil Croatia Netherlands 171 32 64 2.785.100
16 2002 Korea/Japan Brazil Germany Turkey Korea Republic 161 32 64 2.705.197
17 2006 Germany Italy France Germany Portugal 147 32 64 3.359.439
18 2010 South Africa Spain Netherlands Germany Uruguay 145 32 64 3.178.856
19 2014 Brazil Germany Argentina Netherlands Brazil 171 32 64 3.386.810

pandas.DataFrame.sample() : get a random sample of rows

In [19]:
d_wc.sample(5)
Out[19]:
Year Country Winner Runners-Up Third Fourth GoalsScored QualifiedTeams MatchesPlayed Attendance
13 1990 Italy Germany FR Argentina Italy England 115 24 52 2.516.215
16 2002 Korea/Japan Brazil Germany Turkey Korea Republic 161 32 64 2.705.197
17 2006 Germany Italy France Germany Portugal 147 32 64 3.359.439
3 1950 Brazil Uruguay Brazil Sweden Spain 88 13 22 1.045.246
10 1978 Argentina Argentina Netherlands Brazil Italy 102 16 38 1.545.791
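Note that sample() draws different rows every time the cell runs. For a reproducible sample (useful when sharing a notebook), you can pass a random_state seed, or sample a fraction of the rows with frac. A minimal sketch:

In [ ]:
# the same 5 rows on every run, thanks to the fixed seed
d_wc.sample(5, random_state=42)

# or sample 25% of the rows
# d_wc.sample(frac=0.25, random_state=42)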

pandas.DataFrame.info() : print a concise summary of the DataFrame (column types, non-null counts, memory usage)

In [20]:
d_wc.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 20 entries, 0 to 19
Data columns (total 10 columns):
 #   Column          Non-Null Count  Dtype 
---  ------          --------------  ----- 
 0   Year            20 non-null     int64 
 1   Country         20 non-null     object
 2   Winner          20 non-null     object
 3   Runners-Up      20 non-null     object
 4   Third           20 non-null     object
 5   Fourth          20 non-null     object
 6   GoalsScored     20 non-null     int64 
 7   QualifiedTeams  20 non-null     int64 
 8   MatchesPlayed   20 non-null     int64 
 9   Attendance      20 non-null     object
dtypes: int64(4), object(6)
memory usage: 1.7+ KB

pandas.DataFrame.dtypes : attribute showing the data type of each column

In [21]:
d_wc.dtypes
Out[21]:
Year               int64
Country           object
Winner            object
Runners-Up        object
Third             object
Fourth            object
GoalsScored        int64
QualifiedTeams     int64
MatchesPlayed      int64
Attendance        object
dtype: object
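Notice that Attendance shows up as object: the raw file stores it as text with "." as a thousands separator (e.g. "590.549"). Assuming d_wc was read from WorldCups.csv, one way to get a numeric column is to pass the thousands argument to read_csv; a minimal sketch:

In [ ]:
# treat "." as a thousands separator so Attendance parses as an integer
d_wc2 = pd.read_csv("WorldCups.csv", thousands=".")
d_wc2.dtypes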

pandas.DataFrame.describe() : summarize all the numeric columns

In [22]:
d_wc.describe()
Out[22]:
Year GoalsScored QualifiedTeams MatchesPlayed
count 20.000000 20.000000 20.000000 20.000000
mean 1974.800000 118.950000 21.250000 41.800000
std 25.582889 32.972836 7.268352 17.218717
min 1930.000000 70.000000 13.000000 17.000000
25% 1957.000000 89.000000 16.000000 30.500000
50% 1976.000000 120.500000 16.000000 38.000000
75% 1995.000000 145.250000 26.000000 55.000000
max 2014.000000 171.000000 32.000000 64.000000
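By default, describe() only summarizes the numeric columns. To include the object (string) columns as well, pass include="all" (or include="object" for only the non-numeric ones); a minimal sketch:

In [ ]:
# summarize every column; numeric statistics are NaN for string columns and vice versa
d_wc.describe(include="all")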

pandas.Series.describe() : summarize a particular column

In [23]:
d_wc["Third"].describe()
Out[23]:
count          20
unique         14
top       Germany
freq            3
Name: Third, dtype: object

Practice

Using the "WorldCups.csv" data, answer the following:

  • Which two teams played the last game in the data?
  • Which country hosted the most World Cup editions?
  • How many different countries have won a World Cup?
  • What is the range of years for which we have data?
In [62]:
# Add your response here
d = pd.read_csv("WorldCups.csv")

# only the last expression in a cell is displayed automatically, so print the answers
# 1. last game in the data: the most recent final (2014), Germany vs Argentina
print(d.tail(1))
# 2. most frequent host country
print(d.Country.describe())
# 3. number of distinct winners
print(d.Winner.describe())
# 4. range of years covered
print(d.Year.describe())
# display the full table for reference
d
Out[62]:
Year Country Winner Runners-Up Third Fourth GoalsScored QualifiedTeams MatchesPlayed Attendance
0 1930 Uruguay Uruguay Argentina USA Yugoslavia 70 13 18 590.549
1 1934 Italy Italy Czechoslovakia Germany Austria 70 16 17 363.000
2 1938 France Italy Hungary Brazil Sweden 84 15 18 375.700
3 1950 Brazil Uruguay Brazil Sweden Spain 88 13 22 1.045.246
4 1954 Switzerland Germany FR Hungary Austria Uruguay 140 16 26 768.607
5 1958 Sweden Brazil Sweden France Germany FR 126 16 35 819.810
6 1962 Chile Brazil Czechoslovakia Chile Yugoslavia 89 16 32 893.172
7 1966 England England Germany FR Portugal Soviet Union 89 16 32 1.563.135
8 1970 Mexico Brazil Italy Germany FR Uruguay 95 16 32 1.603.975
9 1974 Germany Germany FR Netherlands Poland Brazil 97 16 38 1.865.753
10 1978 Argentina Argentina Netherlands Brazil Italy 102 16 38 1.545.791
11 1982 Spain Italy Germany FR Poland France 146 24 52 2.109.723
12 1986 Mexico Argentina Germany FR France Belgium 132 24 52 2.394.031
13 1990 Italy Germany FR Argentina Italy England 115 24 52 2.516.215
14 1994 USA Brazil Italy Sweden Bulgaria 141 24 52 3.587.538
15 1998 France France Brazil Croatia Netherlands 171 32 64 2.785.100
16 2002 Korea/Japan Brazil Germany Turkey Korea Republic 161 32 64 2.705.197
17 2006 Germany Italy France Germany Portugal 147 32 64 3.359.439
18 2010 South Africa Spain Netherlands Germany Uruguay 145 32 64 3.178.856
19 2014 Brazil Germany Argentina Netherlands Brazil 171 32 64 3.386.810