---
license: mit
task_categories:
  - object-detection
tags:
  - disability-parking
  - accessibility
  - streetscape
dataset_info:
  features:
    - name: image
      dtype: image
    - name: width
      dtype: int32
    - name: height
      dtype: int32
    - name: objects
      sequence:
        - name: bbox
          sequence: float32
          length: 4
        - name: category
          dtype: int64
        - name: area
          dtype: float32
        - name: iscrowd
          dtype: bool
        - name: id
          dtype: int64
        - name: segmentation
          sequence:
            sequence: float32
  splits:
    - name: train
      num_examples: 3688
    - name: test
      num_examples: 717
    - name: validation
      num_examples: 720
---

# AccessParkCV

<strong>AccessParkCV</strong> is a deep learning pipeline that detects disability parking spaces in orthorectified aerial imagery and characterizes their widths. We publish a dataset of 7,069 labeled parking spaces (and 4,693 labeled access aisles), which we used to train the models behind AccessParkCV.

(This repo contains the data in Hugging Face `datasets` format. For the raw COCO format, see [AccessParkCV_coco](https://huggingface.co/datasets/makeabilitylab/AccessParkCV_coco).)

## Dataset Description

This is an object detection dataset with 8 classes:

- objects
- access_aisle
- curbside
- dp_no_aisle
- dp_one_aisle
- dp_two_aisle
- one_aisle
- two_aisle

## Dataset Structure

### Data Fields

- `image`: PIL Image object
- `width`: Image width in pixels
- `height`: Image height in pixels  
- `objects`: Dictionary containing:
  - `bbox`: List of bounding boxes in [x_min, y_min, x_max, y_max] format
  - `category`: List of category IDs
  - `area`: List of bounding box areas
  - `iscrowd`: List of crowd flags (boolean)
  - `id`: List of annotation IDs
  - `segmentation`: List of polygon segmentations (each a flat list of coordinates `[x1, y1, x2, y2, ...]`)
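
The lists under `objects` are parallel: index *i* of each list describes the same annotation. A minimal sketch of per-annotation iteration (it assumes an `example` loaded as shown in the Usage section below):

```python
# Sketch: zip the parallel lists in `objects` to walk annotations one by one.
# Assumes `example` was loaded as in the Usage section below.
objs = example["objects"]
for bbox, category, area in zip(objs["bbox"], objs["category"], objs["area"]):
    x_min, y_min, x_max, y_max = bbox
    print(f"category={category} area={area:.1f} "
          f"box=({x_min:.1f}, {y_min:.1f}, {x_max:.1f}, {y_max:.1f})")
```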

### Category ID to Class

| Category ID | Class |
|-----------------|-----------------|
| 0 | objects |
| 1 | access_aisle |
| 2 | curbside |
| 3 | dp\_no\_aisle |
| 4 | dp\_one\_aisle |
| 5 | dp\_two\_aisle |
| 6 | one\_aisle |
| 7 | two\_aisle |
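
If you need this mapping in code, it can be transcribed directly from the table (a small convenience dict; it is not shipped with the dataset):

```python
# Category ID -> class name, transcribed from the table above
ID2CLASS = {
    0: "objects",
    1: "access_aisle",
    2: "curbside",
    3: "dp_no_aisle",
    4: "dp_one_aisle",
    5: "dp_two_aisle",
    6: "one_aisle",
    7: "two_aisle",
}
CLASS2ID = {name: i for i, name in ID2CLASS.items()}
```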

### Data Sources
| Region          | Lat/Long Bounding Coordinates               | Source Resolution | # images in dataset |
|-----------------|---------------------------------------------|-------------------|---------------------|
| Seattle         | (47.9572, -122.4489),  (47.4091, -122.1551) | 3 inch/pixel      |               2,790 |
| Washington D.C. | (38.9979, -77.1179),  (38.7962, -76.9008)   | 3 inch/pixel      |               1,801 |
| Spring Hill     | (35.7943, -87.0034),  (35.6489, -86.8447)   | Unknown           |                 534 |
| Total           |                                             |                   |               5,125 |

### Class Composition
| Class          | Quantity in dataset |
|----------------|---------------------|
| access\_aisle  |               4,693 |
| curbside       |                  36 |
| dp\_no\_aisle  |                 300 |
| dp\_one\_aisle |               2,790 |
| dp\_two\_aisle |                 402 |
| one\_aisle     |               3,424 |
| two\_aisle     |                 117 |
| Total          |              11,762 |

### Data Splits

| Split | Examples |
|-------|----------|
| train | 3688 |
| test | 717 |
| validation | 720 |
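
To materialize a split (or all three at once) without streaming, `load_dataset` can be called as below; this sketch uses the repo id from the Usage section:

```python
from datasets import load_dataset

# Load all three splits at once; returns a DatasetDict keyed by split name
ds = load_dataset("makeabilitylab/disabilityparking")
print({name: split.num_rows for name, split in ds.items()})
# Expected per the table above: {'train': 3688, 'test': 717, 'validation': 720}
```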

## Usage

```python
from datasets import load_dataset

# Stream the train split so the full dataset is not downloaded up front
train_dataset = load_dataset("makeabilitylab/disabilityparking", split="train", streaming=True)

# Pull the first example off the stream
example = next(iter(train_dataset))

# Accessing the fields of an example
image = example["image"]                            # PIL Image
bboxes = example["objects"]["bbox"]                 # [x_min, y_min, x_max, y_max] per object
categories = example["objects"]["category"]         # category IDs (see table above)
segmentations = example["objects"]["segmentation"]  # polygon coordinates
```
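
As a quick sanity check, you can draw the annotations onto the image. A minimal sketch using Pillow (`ID2CLASS` is the illustrative mapping defined earlier in this card, not a dataset field):

```python
from PIL import ImageDraw

# Draw each bounding box and its class label on a copy of the image
annotated = image.copy()
draw = ImageDraw.Draw(annotated)
for bbox, category in zip(bboxes, categories):
    draw.rectangle(bbox, outline="red", width=2)  # bbox is [x_min, y_min, x_max, y_max]
    draw.text((bbox[0], bbox[1]), ID2CLASS[category], fill="red")
annotated.save("annotated_example.png")
```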

## Citation

```bibtex
@inproceedings{hwang_wherecanIpark,
  title={Where Can I Park? Understanding Human Perspectives and Scalably Detecting Disability Parking from Aerial Imagery},
  author={Hwang, Jared and Li, Chu and Kang, Hanbyul and Hosseini, Maryam and Froehlich, Jon E.},
  booktitle={The 27th International ACM SIGACCESS Conference on Computers and Accessibility},
  series={ASSETS '25},
  numpages={20},
  year={2025},
  month={October},
  location={Denver, CO, USA},
  publisher={ACM},
  address={New York, NY, USA},
  doi={10.1145/3663547.3746377},
  url={https://doi.org/10.1145/3663547.3746377}
}
```