A closed-end real estate fund is an investment vehicle with a finite life (call it anywhere from 5 to 12 years, plus extension options). These types of funds have a specific timeframe for raising capital, investing, harvesting the investments they have made, and then distributing proceeds to investors. This is in contrast to an open-ended fund, also known as an "evergreen" fund, which has an infinite life and can accept investments throughout its lifespan.
As a result of these differences, closed-end funds are often used for opportunistic or value-add opportunities where the defined strategy is to buy, fix/develop, and then sell, whereas open-ended funds are often used for core opportunities, where the assets are intended to be held indefinitely for income. Neither fund structure is inherently good or bad; each has its benefits and drawbacks.
However, the perceived weighting of these benefits and drawbacks shifts during market cycles. Since global real estate markets started to turn downward in 2022, the ability to be patient and think long-term has become a key ingredient for survival. You may have done everything you said you would do perfectly, but the market may not be there to grant you the liquidity you had originally planned for.
Now the question becomes: How patient can and should we be?
In my opinion, the greatest opportunities exist for (1) the larger firms that have a strong balance sheet and defensible income-producing properties and (2) the smaller, nimble firms that can capitalize on the dislocation in the market (and aren't overly burdened with legacy assets that are sucking up resources and capacity).
This perspective is true of other sectors as well. This weekend, venture capitalist Chris Dixon of a16z wrote a post titled, "The long game for crypto." In it, he alludes to the current market downturn (ETH is down nearly 60% from its all-time high) and says that "we play the long game at a16z and a16z crypto: Our funds are structured with 10+ year horizons because building new industries takes time."
The fact that he wrote this post says a lot, I think, about the psyche of investors today. The perceived weighting has changed, and people are now investing and building more for the future. As the late Charlie Munger once said, "The big money is not in the buying and the selling, but in the waiting."
Cover photo by KAi'S PHOTOGRAPHY on Unsplash

On January 23, a Waymo autonomous vehicle hit a child in Santa Monica, California. The age and identity of the child are not public, but "minor injuries" were reported. Waymo responded with this blog post where they essentially argued that "if this had been a human driver, the accident would have been worse."
The event occurred when the pedestrian suddenly entered the roadway from behind a tall SUV, moving directly into our vehicle's path. Our technology immediately detected the individual as soon as they began to emerge from behind the stopped vehicle. The Waymo Driver braked hard, reducing speed from approximately 17 mph to under 6 mph [~9.7 km/h] before contact was made.
To put this in perspective, our peer-reviewed model shows that a fully attentive human driver in this same situation would have made contact with the pedestrian at approximately 14 mph. This significant reduction in impact speed and severity is a demonstration of the material safety benefit of the Waymo Driver.
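The speed difference matters more than it might look, because impact severity tracks kinetic energy, which grows with the square of speed. A quick back-of-the-envelope sketch (my arithmetic, not Waymo's):

```python
# Kinetic energy is E = 1/2 * m * v^2, so for the same vehicle the
# mass term cancels and impact energy scales with speed squared.

def impact_energy_ratio(speed_a_mph: float, speed_b_mph: float) -> float:
    """Ratio of kinetic energy at speed_a vs speed_b (mass cancels out)."""
    return (speed_a_mph / speed_b_mph) ** 2

# Waymo's reported 6 mph contact vs the modeled 14 mph human-driver contact:
ratio = impact_energy_ratio(6, 14)
print(f"A 6 mph impact carries {ratio:.0%} of the energy of a 14 mph impact")
# prints: A 6 mph impact carries 18% of the energy of a 14 mph impact
```

In other words, by this rough measure the slower contact delivers less than a fifth of the energy of the modeled human-driver contact.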
All car accidents causing human injury are unfortunate, but car accidents involving AVs are obviously more noteworthy right now. In my mind, it makes sense that a Waymo should be more responsive than a human driver in the face of a pedestrian jumping out into a roadway.
But being "less bad" is not going to win everyone over. The accident is being investigated to determine whether "the Waymo AV exercised appropriate caution given, among other things, its proximity to the elementary school during drop off hours, and the presence of young pedestrians and other potential vulnerable road users."

The headline is suboptimal for AVs, but it's very possible the Waymo did everything it could, and did it better than any one of us could ever do. We shall see.
Cover photo by Andri Aeschlimann on Unsplash

Towards the end of last year, Meta released SAM 3, the third version of its Segment Anything Model. It lets you detect, edit, and experiment with objects in images and videos. For example, if you were looking at a video of a street, you could ask it to find all the scooters (which I did below), count the number of pedestrians wearing black pants, blur all the license plates on the cars, and so on.
This is immediately useful for a company like Meta because it allows for object-level modifications across its content creation platforms. So if you took a video of someone dancing and you desperately wanted to give them a bobblehead, SAM 3, I'm told, would allow you to quickly do that. Other AI models, such as Gemini, can also segment, but supposedly the SAM models are better and more precise at this specific task.
Beyond bobblehead videos, the potential of this model seems enormous for real estate, cities, and, of course, many other things. Using the above image as an example, you can quickly imagine SAM 3 being used to count and track modal splits across a city, and then make planning decisions based on real-time data.
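As a rough sketch of that modal-split idea: suppose a SAM-style model, prompted with labels like "car" or "cyclist" on each frame of a street video, returned per-frame lists of detections. The detection counts below are made up, and the model call itself is replaced by a plain list; the point is only the aggregation step.

```python
from collections import Counter

# Hypothetical per-frame detections from a segmentation model watching a
# street camera. In practice these would come from prompting the model
# with text labels on each video frame.
frame_detections = [
    ["car", "car", "cyclist", "pedestrian"],
    ["car", "scooter", "pedestrian", "pedestrian"],
    ["bus", "car", "cyclist", "pedestrian"],
]

# Aggregate detections across frames and compute each mode's share.
counts = Counter(label for frame in frame_detections for label in frame)
total = sum(counts.values())
modal_split = {mode: count / total for mode, count in counts.items()}

for mode, share in sorted(modal_split.items(), key=lambda kv: -kv[1]):
    print(f"{mode}: {share:.0%}")
```

The hard part, of course, is the detection itself; once a model can reliably label every road user in a frame, the planning-side analysis is little more than counting.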
People are also using it for real estate purposes. Pair the model with satellite images, and you can ask it to tell you how many houses have a pool, which houses recently had their roof replaced (and have solar panels), how many cars are parked on a street, how many cars are parked at Canadian Tire, and the average building lot coverage in an area.
You could also use it to swap out finishes in a real estate listing (including in videos), and get material/area takeoffs ahead of a construction project. I don't know for sure, but I would also imagine that this model would make a great building condition inspector. Come to think of it, I'd love a SAM 3 that could walk our construction sites and document every little detail!
Of course, a lot of these use cases are already being tackled. But the models are getting that much better. And that will lead to even more innovation.
Cover photo by Above Horizon on Unsplash